2024-11-13T14:47:48.129Z | <Jose J Palacios-Perez> I got a prototype working. The following is the scenario of 8 OSDs and 3 Seastar reactor threads per OSD: each line corresponds to the arguments for an invocation of `ceph conf set "osd.$osd" crimson_seastar_cpu_cores "$bottom_cpu-$top_cpu"`, one line per socket. So the first line allocates 2 cores from socket 0, the next line allocates a single core from socket 1, and so on, until all the OSDs have been configured. The list of cores to disable corresponds to the HT siblings of the cores above. Notice how the remainder core alternates between sockets so as to achieve balance.
```(3.12) jjperez@Joses-Air:~/Work/cephdev/ceph-aprg/bin
[14:39:27]$ [main] # python3 balance-cpu.py -u /tmp/numa_nodes.json -v
0,1
28,28
2,2
29,30
3,4
31,31
5,5
32,33
6,7
34,34
8,8
35,36
9,10
37,37
11,11
38,39
Cores to disable: [56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95]```
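The allocation above can be reproduced with a minimal sketch (this is not the actual `balance-cpu.py`, and the socket layout and HT-sibling offset are assumptions for illustration): split the per-OSD reactor count evenly across sockets and give the remainder core to alternating sockets, collecting the HT sibling of every allocated core for disabling.

```python
# Sketch of the balanced allocation for the 2-socket case shown above.
# Assumed layout: socket 0 = physical cores 0-27, socket 1 = 28-55,
# and the HT sibling of core c is c + 56.

def balance(num_osds, reactors, sockets, ht_offset):
    """sockets: list of (first_core, last_core) physical ranges per socket.
    Returns (per-OSD list of per-socket (bottom, top) ranges, cores to disable)."""
    nsock = len(sockets)
    base, rem = divmod(reactors, nsock)   # e.g. 3 reactors, 2 sockets -> base=1, rem=1
    nxt = [first for first, _ in sockets]  # next free core on each socket
    allocs, disable = [], []
    for osd in range(num_osds):
        ranges = []
        for sk in range(nsock):
            # the remainder core goes to a different socket for each OSD,
            # so both sockets fill at the same rate
            take = base + (1 if rem and sk == osd % nsock else 0)
            bottom, top = nxt[sk], nxt[sk] + take - 1
            nxt[sk] = top + 1
            ranges.append((bottom, top))
            disable.extend(c + ht_offset for c in range(bottom, top + 1))
        allocs.append(ranges)
    return allocs, sorted(disable)
```

With `balance(8, 3, [(0, 27), (28, 55)], 56)` this yields OSD 0 → (0,1) and (28,28), OSD 1 → (2,2) and (29,30), and so on, with the disable list 56-67 and 84-95, matching the printed output.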
Next I will integrate this so vstart.sh can use this information when provided with the new flag `--crimson-balance-cpu`.
I will run some tests and print the grids to verify. |
2024-11-13T21:12:27.359Z | <Jose J Palacios-Perez> My understanding is that cyanstore uses only RAM rather than physical devices (disks), so I was assuming that in that case we have "pure reactors", with no blocking threads (like Alien/Bluestore). In the pure-reactor case, I think we might see an immediate benefit of this balanced CPU allocation vs. the unbalanced one (assuming that the reactors allocate memory local to the CPU core's socket, aka NUMA node). When using Bluestore, since the Alien threads are blocking, we might not see much difference -- I'll launch the tests within this week to get some figures.
I think that when we disable the HT siblings of the cores that the reactors are running on, no other thread will use them.
Here is a snippet of how vstart.sh can consume the above CPU allocation when configuring OSDs:
```
/ceph -c config.conf config set osd.0 crimson_seastar_cpu_cores 0-1
/ceph -c config.conf config set osd.0 crimson_seastar_cpu_cores 28-28
/ceph -c config.conf config set osd.1 crimson_seastar_cpu_cores 2-2
/ceph -c config.conf config set osd.1 crimson_seastar_cpu_cores 29-30
/ceph -c config.conf config set osd.2 crimson_seastar_cpu_cores 3-4
/ceph -c config.conf config set osd.2 crimson_seastar_cpu_cores 31-31
/ceph -c config.conf config set osd.3 crimson_seastar_cpu_cores 5-5
/ceph -c config.conf config set osd.3 crimson_seastar_cpu_cores 32-33
/ceph -c config.conf config set osd.4 crimson_seastar_cpu_cores 6-7
/ceph -c config.conf config set osd.4 crimson_seastar_cpu_cores 34-34
/ceph -c config.conf config set osd.5 crimson_seastar_cpu_cores 8-8
/ceph -c config.conf config set osd.5 crimson_seastar_cpu_cores 35-36
/ceph -c config.conf config set osd.6 crimson_seastar_cpu_cores 9-10
/ceph -c config.conf config set osd.6 crimson_seastar_cpu_cores 37-37
/ceph -c config.conf config set osd.7 crimson_seastar_cpu_cores 11-11
/ceph -c config.conf config set osd.7 crimson_seastar_cpu_cores 38-39```
I have not found whether ceph-conf supports an extended syntax (e.g. comma-separated lists, as taskset supports), so assuming that these per-OSD statements work, we will see shortly 🤞 |
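The commands above follow a fixed pattern, so a small helper (hypothetical, not part of vstart.sh) could render each OSD's per-socket ranges into the `bottom-top` syntax the snippet uses:

```python
# Hypothetical helper: turn per-OSD, per-socket (bottom, top) core ranges
# into the per-socket `ceph config set` invocations shown above.
# Whether ceph-conf also accepts taskset-style comma-separated lists
# (e.g. "0-1,28") is still an open question.

def conf_lines(allocs):
    """allocs: list indexed by OSD id, each a list of (bottom, top) tuples."""
    lines = []
    for osd, ranges in enumerate(allocs):
        for bottom, top in ranges:
            lines.append(
                f"/ceph -c config.conf config set osd.{osd} "
                f"crimson_seastar_cpu_cores {bottom}-{top}")
    return lines
```

For example, `conf_lines([[(0, 1), (28, 28)]])` produces the two osd.0 lines from the snippet.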