Reinforcement Learning -- PyTorch -- DQN

The learning performance of DQN is really impressive. Here is the code for this experiment first; as in the official example, the task is the cart-pole balancing problem.
Video link: Reinforcement Learning DQN

import torch
import torch.nn as nn
import torch.nn.functional as F
import numpy as np
import gym
 
# Hyperparameters
BATCH_SIZE = 32
LR = 0.01  # learning rate
# Reinforcement-learning parameters
EPSILON = 0.9  # greedy policy
GAMMA = 0.9  # reward discount
TARGET_REPLACE_ITER = 200  # target update frequency
MEMORY_CAPACITY = 2000
# Build the experiment environment
env = gym.make('CartPole-v0')
env = env.unwrapped
N_ACTIONS = env.action_space.n
N_STATES = env.observation_space.shape[0]
 
 
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(N_STATES, 10)
        self.fc1.weight.data.normal_(0, 0.1)  # weight initialization
        self.out = nn.Linear(10, N_ACTIONS)
        self.out.weight.data.normal_(0, 0.1)  # weight initialization
 
    def forward(self, x):
        x = self.fc1(x)
        x = F.relu(x)
        actions_value = self.out(x)
        return actions_value
 
class DQN(object):
    def __init__(self):
        self.eval_net, self.target_net = Net(), Net()
        # counters: how many learning steps have been taken, and how many transitions stored
        self.learn_step_counter = 0  # for target update
        self.memory_counter = 0  # for storing memory
        # initialize the replay memory: each row holds (s, a, r, s_)
        self.memory = np.zeros((MEMORY_CAPACITY, N_STATES * 2 + 2))
        self.optimizer = torch.optim.Adam(self.eval_net.parameters(), lr=LR) 
        self.loss_func = nn.MSELoss()
 
    def choose_action(self, x):
        x = torch.unsqueeze(torch.FloatTensor(x), 0)
        if np.random.uniform() < EPSILON:  # greedy: pick the action with the highest Q-value
            action_value = self.eval_net(x)
            action = torch.max(action_value, 1)[1].data.numpy()[0]
        else:  # random exploration
            action = np.random.randint(0, N_ACTIONS)
        return action
 
    # s: current state, a: action, r: reward, s_: next state
    def store_transaction(self, s, a, r, s_):
        transaction = np.hstack((s, [a, r], s_))   
        # replace the old memory with new memory
        index = self.memory_counter % MEMORY_CAPACITY
        self.memory[index, :] = transaction
        self.memory_counter += 1
 
    def learn(self):
        # target net update: periodically copy eval_net's weights into target_net
        if self.learn_step_counter % TARGET_REPLACE_ITER == 0:
            self.target_net.load_state_dict(self.eval_net.state_dict())
        self.learn_step_counter += 1
        sample_index = np.random.choice(MEMORY_CAPACITY, BATCH_SIZE)  # randomly sample BATCH_SIZE transitions from the 2000-slot memory
        b_memory = self.memory[sample_index, :]  # look up the sampled rows
        b_s = torch.FloatTensor(b_memory[:, :N_STATES])
        b_a = torch.LongTensor(b_memory[:, N_STATES: N_STATES+1].astype(int))
        b_r = torch.FloatTensor(b_memory[:, N_STATES+1: N_STATES+2])
        b_s_ = torch.FloatTensor(b_memory[:, -N_STATES:])
 
        # Q(s, a) of the actions actually taken, from the evaluation network
        q_eval = self.eval_net(b_s).gather(1, b_a)
        # Q(s_, .) from the target network; detach() so no gradients flow into it
        q_next = self.target_net(b_s_).detach()
        # TD target: r + gamma * max_a' Q_target(s_, a')
        q_target = b_r + GAMMA * q_next.max(1)[0].view(BATCH_SIZE, 1)
        loss = self.loss_func(q_eval, q_target)
        self.optimizer.zero_grad()
        loss.backward()
        self.optimizer.step()
 
 
dqn = DQN()
print('\nCollecting experience...')
for i_episode in range(4000):
    s = env.reset()
    while True:
        env.render()
 
        a = dqn.choose_action(s)
        # take action
        s_, r, done, info = env.step(a)
 
        # reshape the reward: the closer the cart stays to the centre and the
        # more upright the pole, the larger the reward
        x, x_dot, theta, theta_dot = s_
        r1 = (env.x_threshold - abs(x)) / env.x_threshold - 0.8
        r2 = (env.theta_threshold_radians - abs(theta)) / env.theta_threshold_radians - 0.5
        r = r1 + r2
 
        dqn.store_transaction(s, a, r, s_)
        print(str(dqn.memory_counter) + '\t' + str(i_episode))  # progress: transitions stored / current episode
 
        if dqn.memory_counter > MEMORY_CAPACITY:
            dqn.learn()
 
        if done:
            break
        s = s_
  1. As described above, DQN uses two neural networks, eval_net and target_net in the code. They share the same architecture; the difference is that eval_net updates its weights at every learning step, while target_net only receives a copy of those weights every TARGET_REPLACE_ITER steps (the objective this implements is written out after this list).
  2. The essence of DQN lies in experience replay and fixed Q-targets. When the program starts, the first 2000 steps are spent purely accumulating experience, which is exactly the capacity of the replay memory. During those 2000 steps eval_net is already producing outputs: given a state it outputs action values, an action is chosen greedily in the spirit of Q-learning and executed. target_net does not participate at this stage (it only holds its initial weights). Once 2000 steps have passed, learning begins, i.e. the networks start being updated.
  3. At each update, a random batch of transitions (batch = 32) is drawn from the replay memory. The states s are fed to eval_net and the next states s_ to target_net, giving q_eval and q_next; the loss is computed from them and backpropagated to update the weights. Note that target_net takes no part in this gradient update and keeps its previous parameters; only after the fixed interval are eval_net's weights copied into it directly.
  4. Every training step draws a batch of data. Sampling at random greatly reduces the correlation between samples, and training on a whole batch speeds things up; the effect is quite noticeable. A small variation on the sampling step is sketched below.
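For reference, the update that learn() performs is the standard DQN objective with a fixed target network, where eval_net holds the online parameters \(\theta\) and target_net the periodically copied parameters \(\theta^{-}\):

\[
L(\theta) = \mathbb{E}_{(s, a, r, s') \sim \text{memory}} \Big[ \big( r + \gamma \max_{a'} Q(s', a'; \theta^{-}) - Q(s, a; \theta) \big)^{2} \Big]
\]

Here \(s'\) is the next state (s_ in the code) and \(\gamma\) is GAMMA; q_eval corresponds to \(Q(s, a; \theta)\), q_target to \(r + \gamma \max_{a'} Q(s', a'; \theta^{-})\), and the expectation is approximated by the sampled batch of 32 transitions.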
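On point 4: np.random.choice samples with replacement by default, so a batch can occasionally contain the same transition more than once. Below is a minimal sketch of an alternative sampling step, reusing the MEMORY_CAPACITY and BATCH_SIZE constants defined above; sample_batch is a hypothetical helper name, not part of the original code.

def sample_batch(memory, memory_counter):
    # Only sample from slots that have actually been written; once the buffer
    # has wrapped around, this is simply MEMORY_CAPACITY.
    filled = min(memory_counter, MEMORY_CAPACITY)
    # Sample without replacement so a batch never repeats a transition
    # (assumes at least BATCH_SIZE transitions have already been stored).
    sample_index = np.random.choice(filled, BATCH_SIZE, replace=False)
    return memory[sample_index, :]

In the script above this would stand in for the two sampling lines at the top of learn(); since learning only starts once the memory is full, its behaviour differs only in ruling out duplicates within a batch.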