seals User Guide
The Suite of Environments for Algorithms that Learn Specifications, or seals, is a toolkit for evaluating specification learning algorithms, such as reward or imitation learning. The environments are compatible with Gym, but are designed to test algorithms that learn from user data, without requiring a procedurally specified reward function.
There are two types of environments in seals:
Diagnostic Tasks, which test individual facets of algorithm performance in isolation.
Renovated Environments, adaptations of widely-used benchmarks, such as the MuJoCo continuous control tasks, made suitable for specification learning benchmarks. In particular, this involves removing any side-channel sources of reward information (such as episode boundaries, the score appearing in the observation, etc.) and including all the information needed to compute the reward in the observation space.
seals is under active development and we intend to add more categories of tasks soon.
Installation Instructions
To install the latest release from PyPI, run:
pip install seals
We make releases periodically, but if you wish to use the latest version of the code, you can always install directly from Git master:
pip install git+https://github.com/HumanCompatibleAI/seals.git
seals has optional dependencies needed by some subset of environments. In particular, to use MuJoCo environments, you will need to install MuJoCo 1.5 and then run:
pip install seals[mujoco]
You may need to install some other binary dependencies: see the instructions in Gym and mujoco-py for further information.
You can also use our Docker image, which includes all necessary binary dependencies. You can either build it from the Dockerfile, or download a pre-built image:
docker pull humancompatibleai/seals:base
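Once seals is installed, importing it registers the seals/* environments with Gym, so they can be constructed with gym.make as usual. A minimal sketch (environment ID chosen arbitrarily; any Gym ID from the sections below works the same way):

import gym
import seals  # noqa: F401 -- importing seals registers the seals/* environment IDs with Gym

env = gym.make("seals/CartPole-v0")
obs = env.reset()
obs, reward, done, info = env.step(env.action_space.sample())  # classic Gym step API
env.close()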
Diagnostic Tasks
Diagnostic tasks test individual facets of algorithm performance in isolation.
Branching
Gym ID: seals/Branching-v0
EarlyTerm
Gym ID: seals/EarlyTermPos-v0
and seals/EarlyTermNeg-v0
InitShift
Gym ID: seals/InitShiftTrain-v0
and seals/InitShiftTest-v0
LargestSum
Gym ID: seals/LargestSum-v0
NoisyObs
Gym ID: seals/NoisyObs-v0
Parabola
Gym ID: seals/Parabola-v0
ProcGoal
Gym ID: seals/ProcGoal-v0
RiskyPath
Gym ID: seals/RiskyPath-v0
Sort
Gym ID: seals/Sort-v0
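Each diagnostic task is registered under the Gym ID listed above, so it can be instantiated and inspected like any other Gym environment. A quick sketch (seals/NoisyObs-v0 chosen arbitrarily):

import gym
import seals  # noqa: F401

env = gym.make("seals/NoisyObs-v0")
print(env.observation_space)  # spaces vary per diagnostic task
print(env.action_space)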
Renovated Environments
These environments are adaptations of widely-used reinforcement learning benchmarks from Gym, modified to be suitable for benchmarking specification learning algorithms. In particular, we:
Make episodes fixed length. Since episode termination conditions are often correlated with reward, variable-length episodes provide a side-channel of reward information that algorithms can exploit. Critically, episode boundaries do not exist outside of simulation: in the real world, a human must often “reset” the RL algorithm.
Moreover, many algorithms do not properly handle episode termination, and so are biased towards shorter or longer episodes. This confounds evaluation, making some algorithms appear spuriously good or bad depending on whether their bias aligns with the task objective.
For most tasks, we make episodes fixed length simply by removing the early termination condition. In some environments, such as MountainCar, it does not make sense to continue after the terminal state: in these cases, we make the terminal state an absorbing state that is repeated until the end of the episode.
Ensure observations include all information necessary to compute the ground-truth reward function. For some environments, this has required augmenting the observation space. We make this modification to make RL and specification learning of comparable difficulty in these environments. While in general both RL and specification learning may need to operate in partially observable environments, the observations in these relatively simple environments were typically engineered to make RL easy: for a fair comparison, we must therefore also provide reward learning algorithms with sufficient features to recover the reward.
In the future, we intend to add Atari tasks with the score masked, another reward side-channel.
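The fixed-horizon property is easy to check empirically. A sketch (the exact horizon comes from the registered spec and is not assumed here):

import gym
import seals  # noqa: F401

env = gym.make("seals/MountainCar-v0")
env.reset()
steps, done = 0, False
while not done:
    _, _, done, _ = env.step(env.action_space.sample())
    steps += 1
print(steps)  # the same, fixed episode length on every rollout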
Classic Control
CartPole
Gym ID: seals/CartPole-v0
MountainCar
Gym ID: seals/MountainCar-v0
MuJoCo
Ant
Gym ID: seals/Ant-v0
HalfCheetah
Gym ID: seals/HalfCheetah-v0
Hopper
Gym ID: seals/Hopper-v0
Humanoid
Gym ID: seals/Humanoid-v0
Swimmer
Gym ID: seals/Swimmer-v0
Walker2d
Gym ID: seals/Walker2d-v0
Base Environments
Utilities
Miscellaneous utilities.
- class seals.util.AbsorbAfterDoneWrapper(env, absorb_reward=0.0, absorb_obs=None)[source]
Bases:
Wrapper
Transition into absorbing state instead of episode termination.
When the environment being wrapped returns done=True, we return an absorbing observation. This wrapper always returns done=False.
A convenient way to add absorbing states to environments like MountainCar.
- __init__(env, absorb_reward=0.0, absorb_obs=None)[source]
Initialize AbsorbAfterDoneWrapper.
- Parameters
env – The wrapped Env.
absorb_reward – The reward returned at the absorb state.
absorb_obs – The observation returned at the absorb state. If None, the final observation before entering the absorb state is repeated.
- step(action)[source]
Advance the environment by one step.
This wrapped step() always returns done=False.
After the first done is returned by the underlying Env, we enter an artificial absorb state.
In this artificial absorb state, we stop calling self.env.step(action) (i.e. the action argument is entirely ignored) and we return fixed values for obs, rew, done, and info. The values of obs and rew depend on initialization arguments. info is always an empty dictionary.
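For example, a sketch of wrapping a standard Gym environment so that, once the underlying episode ends, the wrapper keeps returning the last observation with zero reward:

import gym
from seals.util import AbsorbAfterDoneWrapper

env = AbsorbAfterDoneWrapper(gym.make("MountainCar-v0"), absorb_reward=0.0, absorb_obs=None)
env.reset()
for _ in range(10):
    obs, rew, done, info = env.step(env.action_space.sample())
    assert not done  # the wrapper never signals termination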
- class seals.util.AutoResetWrapper(env)[source]
Bases:
Wrapper
Hides done=True and auto-resets at the end of each episode.
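A sketch of its use, based on the description above; the inner environment's episodes are stitched into one continuous stream, with resets handled inside the wrapper:

import gym
from seals.util import AutoResetWrapper

env = AutoResetWrapper(gym.make("CartPole-v0"))
obs = env.reset()
for _ in range(1000):
    obs, rew, done, info = env.step(env.action_space.sample())
    assert not done  # done=True is hidden; the inner env is reset automatically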
- class seals.util.ObsCastWrapper(env, dtype)[source]
Bases:
Wrapper
Cast observations to specified dtype.
Some external environments return observations of a different type than the declared observation space. Where possible, this should be fixed upstream, but casting can be a viable workaround – especially when the returned observations are higher resolution than the observation space.
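For instance, a sketch that casts observations to float32 to match an observation space declared with that dtype:

import gym
import numpy as np
from seals.util import ObsCastWrapper

env = ObsCastWrapper(gym.make("CartPole-v0"), np.float32)
obs = env.reset()
assert obs.dtype == np.float32  # observations now match the declared dtype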
- seals.util.get_gym_max_episode_steps(env_name)[source]
Get the max_episode_steps attribute associated with a gym Spec.
- Return type
Optional
[int
]
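A sketch of its use; per the return type above, it yields None when the registered spec sets no time limit:

import seals  # noqa: F401 -- registers the seals/* specs
from seals.util import get_gym_max_episode_steps

print(get_gym_max_episode_steps("CartPole-v1"))        # 500 in the standard Gym registration
print(get_gym_max_episode_steps("seals/CartPole-v0"))  # the fixed horizon used by seals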
- seals.util.grid_transition_fn(state, action, x_bounds=(-inf, inf), y_bounds=(-inf, inf))[source]
Returns the transition of a deterministic gridworld.
The agent is bounded in the region limited by x_bounds and y_bounds, ends inclusive.
(0, 0) is interpreted to be the top-left corner.
Actions: 0 (right), 1 (down), 2 (left), 3 (up), 4 (stay put).
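A sketch of calling it directly, assuming states are (x, y) integer coordinates as the bounds arguments suggest:

from seals.util import grid_transition_fn

# Move right (action 0) from the top-left corner of a 5x5 grid.
next_state = grid_transition_fn((0, 0), 0, x_bounds=(0, 4), y_bounds=(0, 4))
print(next_state)  # expected to be (1, 0): one step to the right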
- seals.util.make_env_no_wrappers(env_name, **kwargs)[source]
Gym sometimes wraps envs in TimeLimit before returning from gym.make().
This helper method builds directly from spec to avoid this wrapper.
- Return type
Env
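A sketch comparing the two ways of building the same environment; the unwrapped version is built straight from the spec, so it is not wrapped in TimeLimit:

import gym
import seals  # noqa: F401
from seals.util import make_env_no_wrappers

wrapped = gym.make("seals/CartPole-v0")          # gym.make applies TimeLimit from the spec
raw = make_env_no_wrappers("seals/CartPole-v0")  # built directly from the spec, no wrappers
print(type(wrapped).__name__, type(raw).__name__)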
Testing
Helpers for unit-testing environments.
Citing seals
To cite this project in publications:
@misc{seals,
author = {Adam Gleave and Pedro Freire and Steven Wang and Sam Toyer},
title = {{seals}: Suite of Environments for Algorithms that Learn Specifications},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/HumanCompatibleAI/seals}},
}