Add parallel tuning on multiple remote GPUs using Ray #328
Conversation
The current parallel runner works: I've been able to run on multiple GPUs on DAS6-VU and DAS6-Leiden. Several problems remain.
Most tests pass now. On my system it is just the
I have resolved a couple of issues with sequential tuning that had arisen from the changes to the 'eval_all' costfunc. I haven't actually tested the parallel runner yet, but there are a couple of other things I need to attend to first.
Working on a simple parallel runner that uses Ray to distribute the benchmarking of different configurations to remote Ray workers.