Learning the Graph Neural Network Framework DGL: 101 (Getting Started)

About DGL

DGL is a popular open-source graph neural network framework that supports several backends, such as TensorFlow and PyTorch; PyTorch is the one most commonly used. For details, see the official documentation: https://docs.dgl.ai/index.html

101 (Getting Started)

The key steps in a graph neural network workflow:
1. Constructing the graph
2. Assigning features to nodes or edges
3. Building the graph neural network model
4. Training the model
5. Visualizing the results

DGL helps us build a graph neural network quickly, mainly through its graph construction utilities, its mechanism for attaching features to nodes/edges, its built-in collection of graph neural network layers, and its visualization support. The code below follows the official getting-started tutorial.

1. Constructing the Graph

Take the "Zachary's karate club" problem as an example. Zachary's karate club has 34 members; the graph below shows the social ties among them, and the club eventually split into two groups. We know that member 0 and member 33 belong to the two different groups (yellow/red). The task is to predict which group each of the remaining members belongs to, based on the social network of the 34 members. This is therefore a node-level classification problem.

[Figure: the "Zachary's karate club" social network]
How do we construct such a graph? First, list the source node (src) and destination node (dst) of every connection (edge) as two arrays that describe the relationships in the graph. Then build the graph with dgl.DGLGraph(). The code is as follows:

import dgl
import numpy as np

def build_karate_club_graph():
    # All 78 edges are stored in two numpy arrays. One for source endpoints
    # while the other for destination endpoints.
    src = np.array([1, 2, 2, 3, 3, 3, 4, 5, 6, 6, 6, 7, 7, 7, 7, 8, 8, 9, 10, 10,
        10, 11, 12, 12, 13, 13, 13, 13, 16, 16, 17, 17, 19, 19, 21, 21,
        25, 25, 27, 27, 27, 28, 29, 29, 30, 30, 31, 31, 31, 31, 32, 32,
        32, 32, 32, 32, 32, 32, 32, 32, 32, 33, 33, 33, 33, 33, 33, 33,
        33, 33, 33, 33, 33, 33, 33, 33, 33, 33])
    dst = np.array([0, 0, 1, 0, 1, 2, 0, 0, 0, 4, 5, 0, 1, 2, 3, 0, 2, 2, 0, 4,
        5, 0, 0, 3, 0, 1, 2, 3, 5, 6, 0, 1, 0, 1, 0, 1, 23, 24, 2, 23,
        24, 2, 23, 26, 1, 8, 0, 24, 25, 28, 2, 8, 14, 15, 18, 20, 22, 23,
        29, 30, 31, 8, 9, 13, 14, 15, 18, 19, 20, 22, 23, 26, 27, 28, 29, 30,
        31, 32])
    # Edges are directional in DGL; Make them bi-directional.
    u = np.concatenate([src, dst])
    v = np.concatenate([dst, src])
    # Construct a DGLGraph
    return dgl.DGLGraph((u, v))

G = build_karate_club_graph()
print('We have %d nodes.' % G.number_of_nodes())
print('We have %d edges.' % G.number_of_edges())    
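Note that dgl.DGLGraph((u, v)) follows the older DGL API. If you are running a newer DGL release (0.5 or later), the recommended constructor is dgl.graph(); below is a minimal sketch under that assumption, using a tiny 4-node graph so it stands on its own:

import dgl
import torch

# Minimal sketch (assumes DGL >= 0.5): build a small bi-directional graph
# with dgl.graph() instead of the older dgl.DGLGraph() constructor.
src = torch.tensor([0, 1, 2, 3])
dst = torch.tensor([1, 2, 3, 0])
u = torch.cat([src, dst])
v = torch.cat([dst, src])
g = dgl.graph((u, v), num_nodes=4)
print(g.number_of_nodes(), g.number_of_edges())  # 4 8

The same (u, v) arrays from the karate club function could be passed to dgl.graph() in the same way.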

2. Assigning Features to Nodes or Edges

In a graph neural network, features are attached to edges or nodes. For the "Zachary's karate club" problem, they are attached to nodes. Here, each of the 34 nodes is assigned a 5-dimensional learnable embedding.

# In DGL, you can add features for all nodes at once, using a feature tensor that
# batches node features along the first dimension. The code below adds the learnable
# embeddings for all nodes:

import torch
import torch.nn as nn
import torch.nn.functional as F

embed = nn.Embedding(34, 5)  # 34 nodes with embedding dim equal to 5
G.ndata['feat'] = embed.weight


# print out node 2's input feature
print(G.ndata['feat'][2])

# print out node 10 and 11's input features
print(G.ndata['feat'][[10, 11]])
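Although this problem only needs node features, the same mechanism works for edges via G.edata. A minimal sketch (not part of the official tutorial) that attaches a scalar weight to every edge:

# Attach a 1-dimensional weight to each edge; the graph has 78 * 2 = 156
# edges because every edge was added in both directions.
G.edata['w'] = torch.ones(G.number_of_edges(), 1)
print(G.edata['w'].shape)  # torch.Size([156, 1])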

3. Building the Graph Neural Network Model

Here we build a simple two-layer graph convolutional network based on DGL's GraphConv layer.

from dgl.nn.pytorch import GraphConv
class GCN(nn.Module):
    def __init__(self, in_feats, hidden_size, num_classes):
        super(GCN, self).__init__()
        self.conv1 = GraphConv(in_feats, hidden_size)
        self.conv2 = GraphConv(hidden_size, num_classes)

    def forward(self, g, inputs):
        h = self.conv1(g, inputs)
        h = torch.relu(h)
        h = self.conv2(g, h)
        return h

# The first layer transforms input features of size of 5 to a hidden size of 5.
# The second layer transforms the hidden layer and produces output features of
# size 2, corresponding to the two groups of the karate club.
net = GCN(5, 5, 2)
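As a quick sanity check (not in the original tutorial), we can run one forward pass through the untrained network; each of the 34 nodes should get a 2-dimensional logit vector:

# Forward the current node embeddings through the untrained GCN.
with torch.no_grad():
    out = net(G, G.ndata['feat'])
print(out.shape)  # torch.Size([34, 2])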

4. Training the Model

The model input is the learnable embedding created above. As for the labels, only node 0 and node 33 have known labels, while the labels of all other nodes are unknown; this makes it a semi-supervised learning problem, so only node 0 and node 33 are labeled.

inputs = embed.weight
labeled_nodes = torch.tensor([0, 33])  # only the instructor and the president nodes are labeled
labels = torch.tensor([0, 1])  # their labels are different

Model training:

import itertools

optimizer = torch.optim.Adam(itertools.chain(net.parameters(), embed.parameters()), lr=0.01)
all_logits = []  # records every node's logits at each epoch, for visualization later
for epoch in range(50):
    logits = net(G, inputs)  # output of the graph convolutional network
    # we save the logits for visualization later
    all_logits.append(logits.detach())
    logp = F.log_softmax(logits, 1)  # per-node log-probabilities over the two classes
    # we only compute loss for labeled nodes
    loss = F.nll_loss(logp[labeled_nodes], labels)  # loss is computed only on the labeled nodes

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    print('Epoch %d | Loss: %.4f' % (epoch, loss.item()))
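Once training finishes, the predicted group of every member can be read off from the last set of logits. This step is not shown in the official tutorial, but it is a natural way to inspect the result:

# argmax over the 2 output classes gives each node's predicted group
pred = all_logits[-1].argmax(dim=1)
print(pred)
print('node 0 ->', pred[0].item(), '| node 33 ->', pred[33].item())  # should match the two labels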

5. Visualizing the Model

Visualization is done with networkx.

First, visualize the graph itself; the result is shown in the figure below:

import networkx as nx
# Since the actual graph is undirected, we convert it for visualization
# purpose.
nx_G = G.to_networkx().to_undirected()
# Kamada-Kawai layout usually looks pretty for arbitrary graphs
pos = nx.kamada_kawai_layout(nx_G)
nx.draw(nx_G, pos, with_labels=True, node_color=[[.7, .7, .7]])

[Figure: the karate club graph drawn with the Kamada-Kawai layout]

Visualizing the training process:

import matplotlib.animation as animation
import matplotlib.pyplot as plt

def draw(i):
    cls1color = '#00FFFF'
    cls2color = '#FF00FF'
    pos = {}
    colors = []
    for v in range(34):
        pos[v] = all_logits[i][v].numpy()
        cls = pos[v].argmax()
        colors.append(cls1color if cls else cls2color)
    ax.cla()
    ax.axis('off')
    ax.set_title('Epoch: %d' % i)
    nx.draw_networkx(nx_G.to_undirected(), pos, node_color=colors,
            with_labels=True, node_size=300, ax=ax)

fig = plt.figure(dpi=150)
fig.clf()
ax = fig.subplots()
draw(0)  # draw the prediction of the first epoch
plt.close()

[Figure: node predictions at epoch 0]
Animated version:

ani = animation.FuncAnimation(fig, draw, frames=len(all_logits), interval=200)

... I don't know how to display an animated image on CSDN, so the animated result is omitted here.
Also, displaying the animation in PyCharm or Jupyter Notebook requires some extra settings, otherwise nothing shows up.
References: https://blog.csdn.net/qq_42182596/article/details/106528274
https://www.jianshu.com/p/c6b362fde21c
Of course, you should be able to find the Python Scientific setting yourself; it seems the Community edition does not have it...
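If the animation still does not show, one possible workaround (a sketch, assuming a Jupyter environment and that the pillow package is installed for GIF export; not from the original post) is to render or export it explicitly:

from IPython.display import HTML

HTML(ani.to_jshtml())                         # interactive player inside a notebook cell
ani.save('karate_club.gif', writer='pillow')  # or export the animation as a GIF file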
