# Cascade

## Benchmarking

Requirements:

- Docker
- Conda
- A local Flink client
1. First create the conda environment with:

```
conda env create -f environment.yml
```
2. Activate the environment with:

```
conda activate cascade_env
```
3. Start the Kafka and PyFlink local clusters:

```
docker compose up
```
This will launch:

- a Kafka broker at `localhost:9092` (use `kafka:9093` for inter-container communication),
- a [Kafbat UI](https://github.com/kafbat/kafka-ui) at http://localhost:8080, and
- a local Flink cluster with `PyFlink` and all requirements, with a web UI at http://localhost:8081
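The two broker addresses matter when writing your own clients: processes on the host use the advertised `localhost:9092` listener, while other compose services use the internal `kafka:9093` listener. A minimal sketch of that choice (the `/.dockerenv` check is a common container heuristic, not something this repo provides):

```python
import os

def bootstrap_server() -> str:
    """Pick the Kafka bootstrap address for the compose setup above."""
    # Other docker compose services must use the internal listener;
    # clients on the host use the advertised localhost listener.
    return "kafka:9093" if os.path.exists("/.dockerenv") else "localhost:9092"

print(bootstrap_server())
```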
By default the Flink cluster will run with 16 task slots. This can be changed
by setting the `TASK_SLOTS` environment variable, for example:

```
TASK_SLOTS=32 docker compose up
```
You could also scale up the number of taskmanagers, each with the same defined
number of task slots (untested):

```
docker compose up --scale taskmanager=3
```
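When sizing `TASK_SLOTS` and the number of taskmanagers, a useful rule of thumb is that the cluster needs at least as many slots as the job's parallelism, since Flink's default slot sharing lets one slot host one parallel instance of the whole pipeline. A hedged sketch of that check (illustrative, not taken from this repo):

```python
def has_enough_slots(parallelism: int, taskmanagers: int, slots_per_tm: int) -> bool:
    """True if the cluster offers at least `parallelism` task slots.

    Assumes Flink's default slot sharing, where one slot can run one
    parallel instance of every operator in the job.
    """
    return taskmanagers * slots_per_tm >= parallelism

print(has_enough_slots(parallelism=16, taskmanagers=1, slots_per_tm=16))  # True
print(has_enough_slots(parallelism=64, taskmanagers=2, slots_per_tm=16))  # False
```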
Once everything has started (for example, you can see the web UIs running), you
can upload the benchmark job to the cluster. Note that the Kafka topics must be
emptied first, otherwise the job will immediately start consuming old events.
You can use the Kafbat UI for this, for example by deleting topics or purging
messages. To start the job, first navigate to the cascade repo directory, e.g.
`cd /path/to/cascade`, then run the following command, where `X` is the desired
default parallelism:

```
flink run --pyFiles /path/to/cascade/src,/path/to/cascade --pyModule deathstar_movie_review.demo -p X
```
> This command runs `FlinkRuntime.init`, which requires the location of a
> flink-python jarfile. The location is currently hardcoded in
> `src/cascade/runtime/flink_runtime` and should be changed based on your
> environment. The jar file is included as part of the Flink installation
> itself, at https://flink.apache.org/downloads/ (1.20.1).
Once the job is submitted, you can start the benchmark. Open another terminal in
the same directory (and conda environment) and run:

```
python -m deathstar_movie_review.start_benchmark
```
This will start the benchmark by sending events to Kafka. The first phase
initialises the state required for the benchmark and is not measured. The
second phase runs the actual benchmark.
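The two phases can be sketched as a generic pattern — an unmeasured setup phase followed by a timed run. This is illustrative only; the actual `start_benchmark` implementation may differ:

```python
import time

def run_phases(init, benchmark) -> float:
    """Run an unmeasured init phase, then return the benchmark phase's duration."""
    init()                           # phase 1: populate state (not measured)
    start = time.perf_counter()
    benchmark()                      # phase 2: the actual benchmark (measured)
    return time.perf_counter() - start

elapsed = run_phases(lambda: None, lambda: time.sleep(0.01))
print(f"benchmark phase took {elapsed:.3f}s")
```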
### Notes

I am currently trying to scale beyond `-p 16`; however, I ran into the
following issue at `-p 64` with `TASK_SLOTS=128`, so more configuration might
be required:
```
Caused by: java.io.IOException: Insufficient number of network buffers: required 65, but only 38 available. The total number of network buffers is currently set to 4096 of 32768 bytes each. You can increase this number by setting the configuration keys 'taskmanager.memory.network.fraction', 'taskmanager.memory.network.min', and 'taskmanager.memory.network.max'.
```
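The pool in this error is 4096 buffers × 32768 bytes = 128 MiB of network memory. One untested way forward is to raise the bounds the message names in the taskmanager's Flink configuration, for example via the `FLINK_PROPERTIES` environment variable supported by the official Flink Docker images. The values below are illustrative guesses, not tuned settings:

```
taskmanager.memory.network.fraction: 0.2
taskmanager.memory.network.min: 128mb
taskmanager.memory.network.max: 1gb
```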
## Development
Cascade should work with Python 3.10 / 3.11, although other versions could work.
Dependencies should first be installed with:

```
pip install -r requirements.txt
```
## (old) Testing
The `pip install` command should have installed a suitable version of `pytest`.