Adversarial algorithm matching original paper's implementation #770
Draft
taufeeque9 wants to merge 56 commits into master from
Conversation
This change just made the error messages about the missing imitation.algorithms.dagger.ExponentialBetaSchedule go away; it did not fix the root cause.
This reverts commit 8b55134.
Codecov Report
@@            Coverage Diff             @@
##           master     #770      +/-   ##
==========================================
+ Coverage   96.33%   96.37%   +0.03%
==========================================
  Files          93       93
  Lines        8789     8846      +57
==========================================
+ Hits         8467     8525      +58
+ Misses        322      321       -1
ernestum reviewed Aug 29, 2023
Collaborator
ernestum left a comment
Just some nits. A thorough review will follow.
assert buffer.actions is not None
obs = buffer.observations
next_obs = obs[1:]
next_obs = np.concatenate([next_obs, obs[-1:]], axis=0)  # last obs not available
Collaborator
Easier to read if you do:
next_obs = np.concatenate([obs[1:], obs[-1:]], axis=0)
actions = buffer.actions
dones = buffer.episode_starts
dones = np.roll(dones, -1, axis=0)
Collaborator
Same as above: pull buffer.episode_starts into this line.
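Read literally, that suggestion would collapse the two lines to something like the following sketch (buffer comes from the surrounding diff):

```python
import numpy as np

# Shift episode starts back by one step so each entry marks whether the
# *current* step ended its episode.
dones = np.roll(buffer.episode_starts, -1, axis=0)
```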
self.venv_buffering = wrappers.BufferingWrapper(self.venv)

self.disc_trainer_callback = TrainDiscriminatorCallback(self)
Collaborator
Why not define the gen_callback here like this:
self.gen_callback: List[callbacks.BaseCallback] = [self.disc_trainer_callback]
and then just append to it down in the else block?
And while you are at it, rename it to use a plural, because it actually holds more than one callback, e.g. self.gen_callbacks.
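A rough sketch of the suggested shape (the condition and the extra callback are assumptions for illustration, not from the diff):

```python
from typing import List

from stable_baselines3.common import callbacks

# Plural name, since this list will hold more than one callback.
self.gen_callbacks: List[callbacks.BaseCallback] = [self.disc_trainer_callback]
if uses_default_callbacks:  # hypothetical condition standing in for the real one
    pass
else:
    # Only the else branch needs to grow the list.
    self.gen_callbacks.append(extra_callback)  # hypothetical extra callback
```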
Description
This PR updates the adversarial algorithm so that the discriminator is trained between collecting the generator's rollouts and updating the generator. This matches the reference implementation given in Algorithm 1 of the AIRL paper.
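For illustration only, here is a minimal sketch of this ordering using a Stable-Baselines3 callback; the class, attribute, and trainer method names are hypothetical, while the PR's actual TrainDiscriminatorCallback is described below:

```python
from stable_baselines3.common.callbacks import BaseCallback


class DiscriminatorTrainingSketch(BaseCallback):
    """Sketch: train the discriminator right after each rollout is collected."""

    def __init__(self, trainer) -> None:
        super().__init__()
        self.trainer = trainer  # hypothetical adversarial trainer object

    def _on_rollout_end(self) -> None:
        # SB3 calls this after rollout collection and before the generator's
        # gradient update -- the point in Algorithm 1 where the discriminator
        # is trained.
        self.trainer.train_disc()  # hypothetical method name

    def _on_step(self) -> bool:
        # Required by BaseCallback; returning True keeps training going.
        return True
```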
The modification is done by implementing TrainDiscriminatorCallback, which trains the discriminator after rollout collection via callback.on_rollout_end(). The callback first stores the latest rollout in the replay buffer, which is then used to train the discriminator. Once the discriminator is trained, the callback updates the rewards in the generator's rollout/replay buffer using the latest discriminator.

Note that we must also update the advantages and returns in the rollout buffer of on-policy algorithms upon updating the rewards. This is tricky to do because information such as value and done for the last observations of the rollouts is not stored in the rollout buffer. In this PR, these are obtained from the original advantages and rewards. A test of whether this produces correct values still needs to be added.
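For concreteness, here is a minimal sketch of the relabel-then-recompute step against Stable-Baselines3's RolloutBuffer. The function name is hypothetical, and passing last_values/dones explicitly is an assumption for illustration; the PR instead reconstructs this missing information from the stored advantages and rewards:

```python
import numpy as np
import torch as th
from stable_baselines3.common.buffers import RolloutBuffer


def relabel_and_recompute(
    buffer: RolloutBuffer,
    new_rewards: np.ndarray,
    last_values: th.Tensor,
    dones: np.ndarray,
) -> None:
    """Overwrite buffered rewards, then redo returns and advantages.

    ``last_values`` and ``dones`` describe the step after the buffer's last
    observation -- exactly the information the PR notes is not stored and
    therefore has to be reconstructed.
    """
    buffer.rewards[...] = new_rewards  # rewards from the latest discriminator
    # SB3 recomputes ``buffer.advantages`` and ``buffer.returns`` in place.
    buffer.compute_returns_and_advantage(last_values=last_values, dones=dones)
```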
Testing
All the tests for the adversarial algorithms run successfully.