Learning reinforcement learning environments: gym[atari] and the settings used in papers

0. gym core

This part of the code lives in gym/core.py.

The base class is Env, whose main methods are step, reset, render, close, and seed. Its rough skeleton is:

class Env(object):
    def reset(self):
        pass
    def step(self, action):
        pass
    def render(self, mode='human'):
        pass
    def close(self):
        pass
    def seed(self, seed=None):
        pass

The Wrapper class also inherits from Env:

class Wrapper(Env):
    def step(self, action):
        return self.env.step(action)

    def reset(self, **kwargs):
        return self.env.reset(**kwargs)

    def render(self, mode='human', **kwargs):
        return self.env.render(mode, **kwargs)

    def close(self):
        return self.env.close()

    def seed(self, seed=None):
        return self.env.seed(seed)

The point of a wrapper is that when we want to customize an environment configuration, we can subclass Wrapper and override some of its methods; at use time we pass the chosen game env in as the constructor argument, and that environment's behaviour changes accordingly.
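For example, a hypothetical wrapper that reports the current episode step count through info (illustrative only, not part of gym) could look like this:

import gym

class StepCountWrapper(gym.Wrapper):
    """Hypothetical example: report the episode step count through info."""
    def __init__(self, env):
        gym.Wrapper.__init__(self, env)
        self._steps = 0

    def reset(self, **kwargs):
        self._steps = 0
        return self.env.reset(**kwargs)

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self._steps += 1
        info['episode_step'] = self._steps
        return obs, reward, done, info

Wrapping is then just StepCountWrapper(gym.make('Pong-v0')).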

There are correspondingly observation, reward, and action wrappers; overriding the matching method is enough. ObservationWrapper, for instance:

class ObservationWrapper(Wrapper):
    def reset(self, **kwargs):
        observation = self.env.reset(**kwargs)
        return self.observation(observation)

    def step(self, action):
        observation, reward, done, info = self.env.step(action)
        return self.observation(observation), reward, done, info

    def observation(self, observation):
        raise NotImplementedError
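A hypothetical subclass only needs to implement observation(); for instance, one that crops the top rows of the image (not part of gym, and it leaves observation_space untouched for brevity):

import gym

class CropTopWrapper(gym.ObservationWrapper):
    """Hypothetical example: drop the top `top` rows of an image observation."""
    def __init__(self, env, top=30):
        gym.ObservationWrapper.__init__(self, env)
        self.top = top

    def observation(self, observation):
        return observation[self.top:]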

1. Environment names

Each Atari game comes in several environment variants, distinguished by suffixes in the name that encode subtle internal differences.

Take Pong as an example. Pong-ram-v0 uses the Atari machine's RAM as the observation (a 128-dimensional vector of byte values). The other variants use a 210x160 image as the observation. The detailed differences are listed below (from https://www.endtoend.ai/envs/gym/atari/):

Name                   Frame skip k   Repeat-action probability p
Pong-v0                2-4            0.25
Pong-v4                2-4            0
PongDeterministic-v0   4              0.25
PongDeterministic-v4   4              0
PongNoFrameskip-v0     1              0.25
PongNoFrameskip-v4     1              0

The v0 suffix means that with probability p the previous action is repeated, outside the agent's control (sticky actions, which add stochasticity to the environment; see Revisiting the Arcade Learning Environment). The v4 suffix sets this probability p to 0. The middle part of the name controls the frame skip k: the agent acts once every k frames and the chosen action is held for those k frames (a range such as 2-4 means k is sampled from {2, 3, 4} each step). This setting also keeps trained agents from reacting faster than a human could.
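As a quick check (assuming gym with the Atari ROMs installed), the two kinds of observation space can be inspected directly:

import gym

for name in ["Pong-ram-v0", "PongNoFrameskip-v4"]:
    env = gym.make(name)
    # RAM variant: a (128,) uint8 vector; image variant: a (210, 160, 3) uint8 array
    print(name, env.observation_space)
    env.close()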

The following code lists all registered environments:

from gym import envs
env_names = [spec.id for spec in envs.registry.all()]
for name in sorted(env_names):
    print(name)


2. Additional configuration

Besides the environment's built-in settings, experiments usually apply a series of extra configurations before training, typically by subclassing gym.Wrapper and overriding some of its methods.
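The wrappers below are taken from OpenAI baselines' atari_wrappers.py. For the snippets to run as written, imports along the following lines are assumed:

from collections import deque

import cv2
import gym
import numpy as np
from gym import spaces
from gym.wrappers import TimeLimit  # used by make_atari in section 3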

2.1 Reset rules

The Atari emulator itself is deterministic. An agent can perform well in a deterministic environment yet be highly sensitive to small perturbations, so randomness is usually injected into the setup:

class NoopResetEnv(gym.Wrapper):
    def __init__(self, env, noop_max=30):
        """Sample initial states by taking random number of no-ops on reset.
        No-op is assumed to be action 0.
        """
        gym.Wrapper.__init__(self, env)
        self.noop_max = noop_max
        self.override_num_noops = None
        self.noop_action = 0
        assert env.unwrapped.get_action_meanings()[0] == 'NOOP'

    def reset(self, **kwargs):
        """ Do no-op action for a number of steps in [1, noop_max]."""
        self.env.reset(**kwargs)
        if self.override_num_noops is not None:
            noops = self.override_num_noops
        else:
            noops = self.unwrapped.np_random.randint(1, self.noop_max + 1) #pylint: disable=E1101
        assert noops > 0
        obs = None
        for _ in range(noops):
            obs, _, done, _ = self.env.step(self.noop_action)
            if done:
                obs = self.env.reset(**kwargs)
        return obs

    def step(self, ac):
        return self.env.step(ac)

After reset, a random number of no-op actions (in [1, noop_max]) is performed.
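A hypothetical usage, with the imports above:

# Each episode now starts with between 1 and 30 no-op frames.
env = NoopResetEnv(gym.make("PongNoFrameskip-v4"), noop_max=30)
obs = env.reset()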

class FireResetEnv(gym.Wrapper):
    def __init__(self, env):
        """Take action on reset for environments that are fixed until firing."""
        gym.Wrapper.__init__(self, env)
        assert env.unwrapped.get_action_meanings()[1] == 'FIRE'
        assert len(env.unwrapped.get_action_meanings()) >= 3

    def reset(self, **kwargs):
        self.env.reset(**kwargs)
        obs, _, done, _ = self.env.step(1)
        if done:
            self.env.reset(**kwargs)
        obs, _, done, _ = self.env.step(2)
        if done:
            self.env.reset(**kwargs)
        return obs

    def step(self, ac):
        return self.env.step(ac)

Some agents have trouble learning the "press FIRE to start" behaviour, so after reset this wrapper issues the FIRE action (action 1) itself.
See https://github.com/openai/baselines/issues/240#issuecomment-391165056 for further discussion.
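The asserts in FireResetEnv rely on ALE's action meanings; a quick check (assuming the ROMs are installed):

env = gym.make("BreakoutNoFrameskip-v4")
print(env.unwrapped.get_action_meanings())  # ['NOOP', 'FIRE', 'RIGHT', 'LEFT']
env.close()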

2.2 Episode termination

In some games the player has multiple lives. Treating only the true game over as the end of a training episode can make it harder for the agent to learn how costly losing a life is (Mnih et al. (2015)):

class EpisodicLifeEnv(gym.Wrapper):
    def __init__(self, env):
        """Make end-of-life == end-of-episode, but only reset on true game over.
        Done by DeepMind for the DQN and co. since it helps value estimation.
        """
        gym.Wrapper.__init__(self, env)
        self.lives = 0
        self.was_real_done  = True

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        self.was_real_done = done
        # check current lives, make loss of life terminal,
        # then update lives to handle bonus lives
        lives = self.env.unwrapped.ale.lives()
        
        if lives < self.lives and lives > 0:
            # for Qbert sometimes we stay in lives == 0 condition for a few frames
            # so it's important to keep lives > 0, so that we only reset once
            # the environment advertises done.
            done = True
        self.lives = lives
        return obs, reward, done, info

    def reset(self, **kwargs):
        """Reset only when lives are exhausted.
        This way all states are still reachable even though lives are episodic,
        and the learner need not know about any of this behind-the-scenes.
        """
        if self.was_real_done:
            obs = self.env.reset(**kwargs)
        else:
            # no-op step to advance from terminal/lost life state
            obs, _, _, _ = self.env.step(0)
        self.lives = self.env.unwrapped.ale.lives()
        return obs

The wrapper keeps was_real_done as the flag for whether the game has truly ended, while every loss of a life is reported as done.

Although this may teach the agent to avoid dying, Bellemare et al. (2016b) note that it can hurt final performance, and it also works against the goal of minimizing the use of game-specific information.
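The wrapper reads the life counter through ALE; for example, Breakout reports its remaining lives as follows (a quick check, assuming the ROM is installed):

env = gym.make("BreakoutNoFrameskip-v4")
env.reset()
print(env.unwrapped.ale.lives())  # 5 lives at the start of a Breakout game
env.close()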

2.3 Frame skipping

Atari games run at 60 frames per second by default. If we want to control the effective action rate ourselves, we pick a NoFrameskip version and then configure the skipping through a wrapper:

class MaxAndSkipEnv(gym.Wrapper):
    def __init__(self, env, skip=4):
        """Return only every `skip`-th frame"""
        gym.Wrapper.__init__(self, env)
        # most recent raw observations (for max pooling across time steps)
        self._obs_buffer = np.zeros((2,)+env.observation_space.shape, dtype=np.uint8)
        self._skip       = skip

    def step(self, action):
        """Repeat action, sum reward, and max over last observations."""
        total_reward = 0.0
        done = None
        for i in range(self._skip):
            obs, reward, done, info = self.env.step(action)
            if i == self._skip - 2: self._obs_buffer[0] = obs
            if i == self._skip - 1: self._obs_buffer[1] = obs
            total_reward += reward
            if done:
                break
        # Note that the observation on the done=True frame
        # doesn't matter
        max_frame = self._obs_buffer.max(axis=0)

        return max_frame, total_reward, done, info

    def reset(self, **kwargs):
        return self.env.reset(**kwargs)

The frame-skipping environment (Naddaf, 2010) is implemented above, with skip=4 as the default: the same action is executed on each skipped frame, the rewards are summed and returned as the step reward, and the returned observation is the element-wise max of the last two frames, a max-pooling over time that smooths out the flicker of sprites the Atari hardware draws only on alternate frames (Montfort & Bogost, 2009).

2.4 Reward clipping and observation preprocessing

class ClipRewardEnv(gym.RewardWrapper):
    def __init__(self, env):
        gym.RewardWrapper.__init__(self, env)

    def reward(self, reward):
        """Bin reward to {+1, 0, -1} by its sign."""
        return np.sign(reward)

Reward clipping bins the reward into {+1, 0, -1} by its sign (Mnih et al., 2015), so that large differences in reward scale across games do not destabilize the algorithm. A similar idea is to divide every reward by the first non-zero reward received (Bellemare et al., 2013), on the assumption that this first reward is representative of the game's reward scale.
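The clipping variant is what baselines implements; the division-by-first-reward variant is not part of atari_wrappers, but a minimal sketch along the lines of Bellemare et al. (2013) could look like this (NormalizeByFirstRewardEnv is a hypothetical name):

class NormalizeByFirstRewardEnv(gym.RewardWrapper):
    """Hypothetical sketch: scale rewards by the magnitude of the first
    non-zero reward observed, instead of clipping to {+1, 0, -1}."""
    def __init__(self, env):
        gym.RewardWrapper.__init__(self, env)
        self._scale = None

    def reward(self, reward):
        if self._scale is None and reward != 0:
            self._scale = abs(reward)
        return reward / self._scale if self._scale else reward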

class WarpFrame(gym.ObservationWrapper):
    def __init__(self, env, width=84, height=84, grayscale=True):
        """Warp frames to 84x84 as done in the Nature paper and later work."""
        gym.ObservationWrapper.__init__(self, env)
        self.width = width
        self.height = height
        self.grayscale = grayscale
        if self.grayscale:
            self.observation_space = spaces.Box(low=0, high=255,
                shape=(self.height, self.width, 1), dtype=np.uint8)
        else:
            self.observation_space = spaces.Box(low=0, high=255,
                shape=(self.height, self.width, 3), dtype=np.uint8)

    def observation(self, frame):
        if self.grayscale:
            frame = cv2.cvtColor(frame, cv2.COLOR_RGB2GRAY)
        frame = cv2.resize(frame, (self.width, self.height), interpolation=cv2.INTER_AREA)
        if self.grayscale:
            frame = np.expand_dims(frame, -1)
        return frame

This resizes the raw 210x160 frame to 84x84 and converts the colour image to grayscale.

class ScaledFloatFrame(gym.ObservationWrapper):
    def __init__(self, env):
        gym.ObservationWrapper.__init__(self, env)
        self.observation_space = gym.spaces.Box(low=0, high=1, shape=env.observation_space.shape, dtype=np.float32)

    def observation(self, observation):
        # careful! This undoes the memory optimization, use
        # with smaller replay buffers only.
        return np.array(observation).astype(np.float32) / 255.0

This rescales observations from [0, 255] to [0, 1].

A single 84x84 float32 frame takes 84 x 84 x 4 = 28,224 bytes. A replay buffer is usually sized at 10^6 transitions; counting only the current and next observation, that is 2 x 28,224 x 10^6 bytes ≈ 56,448 MB (about 56 GB), so some memory optimization is needed.

One option is to skip the ScaledFloatFrame conversion and store observations in the replay buffer as uint8 in [0, 255], cutting memory use to a quarter, and only rescale to [0, 1] when feeding the network.
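In that scheme the rescaling happens only when building the batch that goes into the network; a minimal sketch (to_network_input is a hypothetical helper):

def to_network_input(obs_batch):
    # obs_batch holds uint8 frames sampled from the replay buffer.
    return np.asarray(obs_batch, dtype=np.float32) / 255.0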

2.5 Frame stacking

A single frame may not carry enough information for the agent (motion such as the ball's velocity in Pong cannot be read from one image), so frame stacking combines the last k frames into the observation to reduce the risk of partial observability.

class FrameStack(gym.Wrapper):
    def __init__(self, env, k):
        """Stack k last frames.
        Returns lazy array, which is much more memory efficient.
        See Also
        --------
        baselines.common.atari_wrappers.LazyFrames
        """
        gym.Wrapper.__init__(self, env)
        self.k = k
        self.frames = deque([], maxlen=k)
        shp = env.observation_space.shape
        self.observation_space = spaces.Box(low=0, high=255, shape=(shp[:-1] + (shp[-1] * k,)), dtype=env.observation_space.dtype)

    def reset(self):
        ob = self.env.reset()
        for _ in range(self.k):
            self.frames.append(ob)
        return self._get_ob()

    def step(self, action):
        ob, reward, done, info = self.env.step(action)
        self.frames.append(ob)
        return self._get_ob(), reward, done, info

    def _get_ob(self):
        assert len(self.frames) == self.k
        return LazyFrames(list(self.frames))

class LazyFrames(object):
    def __init__(self, frames):
        """This object ensures that common frames between the observations are only stored once.
        It exists purely to optimize memory usage which can be huge for DQN's 1M frames replay
        buffers.
        This object should only be converted to numpy array before being passed to the model.
        You'd not believe how complex the previous solution was."""
        self._frames = frames
        self._out = None

    def _force(self):
        if self._out is None:
            self._out = np.concatenate(self._frames, axis=-1)
            self._frames = None
        return self._out

    def __array__(self, dtype=None):
        out = self._force()
        if dtype is not None:
            out = out.astype(dtype)
        return out

    def __len__(self):
        return len(self._force())

    def __getitem__(self, i):
        return self._force()[..., i]

LazyFrames exists to reduce the memory footprint of a 1M-transition replay buffer: consecutive stacked observations share their underlying frames instead of storing copies.
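Since LazyFrames only mimics an ndarray, it should be converted with np.array just before being passed to the model, for example:

obs = env.reset()      # a LazyFrames object once FrameStack is applied
x = np.array(obs)      # materializes the stacked frames, e.g. shape (84, 84, 4)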

3. Example

Putting the wrappers above together, a typical Atari preprocessing setup for reinforcement learning looks like this:

def make_atari(env_id, max_episode_steps=None):
    env = gym.make(env_id)
    assert 'NoFrameskip' in env.spec.id
    env = NoopResetEnv(env, noop_max=30)
    env = MaxAndSkipEnv(env, skip=4)
    if max_episode_steps is not None:
        env = TimeLimit(env, max_episode_steps=max_episode_steps)
    return env

def wrap_deepmind(env, episode_life=True, clip_rewards=True, frame_stack=False, scale=False):
    """Configure environment for DeepMind-style Atari.
    """
    if episode_life:
        env = EpisodicLifeEnv(env)
    if 'FIRE' in env.unwrapped.get_action_meanings():
        env = FireResetEnv(env)
    env = WarpFrame(env)
    if scale:
        env = ScaledFloatFrame(env)
    if clip_rewards:
        env = ClipRewardEnv(env)
    if frame_stack:
        env = FrameStack(env, 4)
    return env
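A typical call (assuming the Atari ROMs are installed) would then be:

env = make_atari("PongNoFrameskip-v4")
env = wrap_deepmind(env, frame_stack=True, scale=False)
obs = env.reset()
print(env.observation_space)   # Box with shape (84, 84, 4), dtype uint8
print(np.array(obs).shape)     # (84, 84, 4)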

References

Atari Environments. https://www.endtoend.ai/envs/gym/atari/

Marlos C. Machado et al. Revisiting the Arcade Learning Environment: Evaluation Protocols and Open Problems for General Agents.

OpenAI baselines, atari_wrappers.py. https://github.com/openai/baselines
