Developing a Search Engine Spider in C# with ASP.NET

C# is particularly well suited to building spider programs, because it has built-in support for HTTP access and for multithreading, and both capabilities are critical to a spider. These are the key problems to solve when building one:
  ⑴ HTML parsing: some kind of HTML parser is needed to analyze every page the spider encounters.
  ⑵ Page handling: every downloaded page must be processed. Its content may be saved to disk or analyzed further.
  ⑶ Multithreading: only with multithreading can a spider be truly efficient.
  ⑷ Knowing when the job is done: do not underestimate this problem. Deciding whether the work has finished is not simple, especially in a multithreaded environment.

  1. HTML Parsing
  C# itself has no built-in HTML parsing. It does support XML parsing, but XML has a strict syntax, and a parser designed for XML is useless for HTML, whose syntax is far looser. We therefore need to design our own HTML parser. The parser presented in this article is highly self-contained, and you can easily reuse it anywhere else you need to process HTML in C#.
  The HTML parser provided here is implemented by the ParseHTML class and is very easy to use: first create an instance of the class, then set its Source property to the HTML document to be parsed:
ParseHTML parse = new ParseHTML();
parse.Source = "<p>Hello World</p>";


  You can then loop over all the text and tags contained in the HTML document. Typically, the scan starts with a while loop that tests the Eof method:
while(!parse.Eof())
{
char ch = parse.Parse();


  The Parse method returns the characters of the HTML document. It returns only characters that are not part of an HTML tag; when it encounters a tag, it returns the value 0, signaling that a tag has been reached. Having hit a tag, we can process it with the GetTag() method.
if(ch==0)
{
AttributeList tag = parse.GetTag();
}


  Generally, one of a spider's most important tasks is finding each HREF attribute, which C#'s indexer support makes easy. For example, the following code extracts the value of the HREF attribute (if one is present).
Attribute href = tag["HREF"];
string link = href.Value;


  Once you hold the Attribute object, Attribute.Value yields the attribute's value.
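  Putting these pieces together, here is a minimal sketch of a complete link-extraction loop built from the ParseHTML and Attribute classes listed at the end of this article (the helper name ExtractLinks is mine, not part of the source):
private static System.Collections.ArrayList ExtractLinks(string html)
{
 // Collect every HREF value found in the HTML string.
 System.Collections.ArrayList links = new System.Collections.ArrayList();
 ParseHTML parse = new ParseHTML();
 parse.Source = html;
 while(!parse.Eof())
 {
  char ch = parse.Parse();
  if(ch==0) // 0 means Parse() just consumed a tag
  {
   Attribute href = parse.GetTag()["HREF"];
   if( href!=null )
    links.Add(href.Value);
  }
 }
 return links;
}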

2. Processing the HTML Page
  Now let's look at how to process an HTML page. The first step, naturally, is to download the page, which the HttpWebRequest class provides for:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(m_uri);
response = request.GetResponse();
stream = response.GetResponseStream();


  Next we obtain a stream from the response. Before doing anything else, we must determine whether the file is binary or text, because the two types are handled differently. The following code checks whether the file is binary.
if( !response.ContentType.ToLower().StartsWith("text/") )
{
SaveBinaryFile(response);
return null;
}
string buffer = "",line;


  If the file is not a text file, we read it in as binary. If it is text, we first create a StreamReader from the stream, then append the file's content to a buffer line by line.
reader = new StreamReader(stream);
while( (line = reader.ReadLine())!=null )
{
buffer+=line+"/r/n";
}
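  A caveat on efficiency: appending to a string inside a loop copies the whole buffer on every pass, so for larger pages a StringBuilder is noticeably cheaper. Here is an equivalent sketch of the same loop (behavior unchanged; the variable names are mine):
System.Text.StringBuilder sb = new System.Text.StringBuilder();
string line;
StreamReader reader = new StreamReader(stream);
while( (line = reader.ReadLine())!=null )
{
 sb.Append(line);
 sb.Append("\r\n"); // keep the same line ending the original loop added
}
string buffer = sb.ToString();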


  Once the whole file is loaded, the next step is to save it as a text file.
SaveTextFile(buffer);


  Now let's look at how these two kinds of files are stored.
  A binary file's declared content type does not begin with "text/". The spider writes a binary file straight to disk without any extra processing, because a binary file contains no HTML and therefore no links for the spider to follow. These are the steps for writing a binary file.
  First, prepare a buffer to hold the binary file's content temporarily.
byte []buffer = new byte[1024];


  Next we determine the path and name under which to save the file locally. If we are downloading the content of the myhost.com site into the local folder c:\test, and a binary file's web path and name is http://myhost.com/images/logo.gif, then its local path and name should be c:\test\images\logo.gif. At the same time, we must make sure the images subdirectory has been created under c:\test. This part of the job is handled by the convertFilename method.
string filename = convertFilename( response.ResponseUri );


  The convertFilename method takes the HTTP address apart and creates the matching directory structure. Once the output file's name and path are determined, we can open the input stream that reads the web page and the output stream that writes the local file.
Stream outStream = File.Create( filename );
Stream inStream = response.GetResponseStream();


  Now the content of the web file can be read and written into the local file, which a simple loop handles.
int l;
do
{
l = inStream.Read(buffer,0,buffer.Length);
if(l>0)
outStream.Write(buffer,0,l);
} while(l>0);


  Once the whole file has been written, close both the input and output streams.
outStream.Close();
inStream.Close();


  By comparison, downloading a text file is easier. A text file's content type always begins with "text/". Assume the file has already been downloaded into a string; that string can be used to scan the page for links, and it can of course also be saved as a file on disk. The code below saves the text file.
string filename = convertFilename( m_uri );
StreamWriter outStream = new StreamWriter( filename );
outStream.Write(buffer);
outStream.Close();


  Here we first open a file output stream, then write the buffer's content into the stream, and finally close the file.
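  One caveat: StreamReader and StreamWriter default to UTF-8, so a page served in another encoding will be transcoded (or mangled) on this round trip. If that matters, pin the encoding explicitly; a minimal sketch (windows-1252 is only an example here, not what the article's code does):
// Read and write with an explicit encoding instead of the UTF-8 default.
System.Text.Encoding enc = System.Text.Encoding.GetEncoding("windows-1252");
StreamReader reader = new StreamReader(stream, enc);
StreamWriter outStream = new StreamWriter(filename, false, enc);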


  3. Multithreading
  Multithreading makes a computer appear to carry out more than one operation at once. Unless the machine has multiple processors, however, "simultaneously" running several operations is only a simulated effect, achieved by switching rapidly among the threads. In general, multithreading actually speeds a program up in just two situations: when the computer has multiple processors, and when the program spends much of its time waiting for external events.
  A spider is a textbook example of the second situation: every time it issues a URL request, it must wait for the file to finish downloading before requesting the next URL. If the spider can request several URLs at once, total download time clearly shrinks.
  To that end, we encapsulate everything needed to download one URL in the DocumentWorker class. Whenever a DocumentWorker instance is created, it enters a loop and waits for the next URL to process. Here is DocumentWorker's main loop:
while(!m_spider.Quit )
{
m_uri = m_spider.ObtainWork();
m_spider.SpiderDone.WorkerBegin();
string page = GetPage();
if(page!=null)
ProcessPage(page);
m_spider.SpiderDone.WorkerEnd();
}


  This loop runs until the Quit flag is set to true (which happens when the user clicks the "Cancel" button). Inside the loop, we call ObtainWork to get a URL. ObtainWork waits until a URL becomes available, which requires another thread to have parsed a document and found links. The Done class uses the WorkerBegin and WorkerEnd methods to determine when the entire download operation has finished.
  As Figure 1 shows, the spider lets the user choose how many threads to use. In practice, the optimal number of threads depends on many factors. If your machine performs well, or has two processors, a higher thread count makes sense; conversely, with limited network bandwidth or machine performance, adding threads will not necessarily improve performance.
  4. Is the Job Done?
  Downloading files on several threads at once is an effective performance boost, but it brings thread-management problems with it. The most intricate of these is: when exactly has the spider finished its work? Here we rely on a dedicated class, Done, to decide.
  First it is worth spelling out what "finished" means. The spider's work is done only when no URL in the system is waiting to be downloaded and every worker thread has finished its processing. In other words, being done means there are no URLs left waiting to download and none currently downloading.
  The Done class provides a WaitDone method whose job is simply to wait until the Done object detects that the spider has finished. Here is the code for WaitDone.
public void WaitDone()
{
Monitor.Enter(this);
while ( m_activeThreads>0 )
{
Monitor.Wait(this);
}
Monitor.Exit(this);
}


  WaitDone waits until no threads are active any longer. Note, however, that no threads are active during the very first moments of a download either, so the spider could easily appear to be finished the instant it starts. To solve this we need another method, WaitBegin, which waits for the spider to enter its "real" working phase. The usual calling order is: call WaitBegin first, then call WaitDone, and WaitDone will wait for the spider to finish its work. Here is the code for WaitBegin:
public void WaitBegin()
{
Monitor.Enter(this);
while ( !m_started )
{
Monitor.Wait(this);
}
Monitor.Exit(this);
}


  WaitBegin waits until the m_started flag has been set. That flag is set by the WorkerBegin method. The worker threads call WorkerBegin as they start processing each URL, and WorkerEnd when they finish; these two methods help the Done object track the current state of the work. Here is the code for WorkerBegin:
public void WorkerBegin()
{
Monitor.Enter(this);
m_activeThreads++;
m_started = true;
Monitor.Pulse(this);
Monitor.Exit(this);
}


  WorkerBegin first increments the count of active threads, then sets the m_started flag, and finally calls the Pulse method to notify any thread that may be waiting for a worker to start. As noted above, the method that may be waiting on the Done object is WaitBegin. WorkerEnd is called after each URL has been processed:
public void WorkerEnd()
{
Monitor.Enter(this);
m_activeThreads--;
Monitor.Pulse(this);
Monitor.Exit(this);
}


  WorkerEnd decrements the m_activeThreads counter and calls Pulse to release any thread that may be waiting on the Done object; as noted above, that would be the WaitDone method.

  Conclusion: this article has introduced the basics of developing an Internet spider. The source code provided above will help you dig deeper into the topic, and it is flexible enough to drop straight into your own programs.
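  As a closing illustration, here is a minimal, hypothetical console driver for the Spider class. It assumes the m_spiderForm references have been removed from Spider.cs, as the comments in that file suggest for standalone use (ReportTo then simply stays null):
using System;

namespace Spider
{
 // Hypothetical driver, not part of the original source.
 class Demo
 {
  static void Main()
  {
   Spider spider = new Spider();
   spider.OutputPath = @"c:\test\"; // where downloaded files will be saved
   // Start() queues the base URI, spawns the worker threads, and
   // blocks until the Done object reports that no work remains.
   spider.Start(new Uri("http://myhost.com/"), 5);
   Console.WriteLine("Spidering complete.");
  }
 }
}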

DocumentWorker.cs


using System;
using System.Net;
using System.IO;
using System.Threading;

namespace Spider
{
 /// <summary>
 /// Perform all of the work of a single thread for the spider.
 /// This involves waiting for a URL to become available, then
 /// downloading and processing the page.
 ///
 /// </summary>
 public class DocumentWorker
 {
  /// <summary>
  /// The base URI that is to be spidered.
  /// </summary>
  private Uri m_uri;

  /// <summary>
  /// The spider that this thread "works for"
  /// </summary>
  private Spider m_spider;

  /// <summary>
  /// The thread that is being used.
  /// </summary>
  private Thread m_thread;

  /// <summary>
  /// The thread number, used to identify this worker.
  /// </summary>
  private int m_number;
  

  /// <summary>
  /// The name for default documents.
  /// </summary>
  public const string IndexFile = "index.html";

  /// <summary>
  /// Constructor.
  /// </summary>
  /// <param name="spider">The spider that owns this worker.</param>
  public DocumentWorker(Spider spider)
  {
   m_spider = spider;
  }

  /// <summary>
  /// This method will take a URI name, such as /images/blank.gif,
  /// and convert it into the name of a file for local storage.
  /// If the directory structure to hold this file does not exist, it
  /// will be created by this method.
  /// </summary>
  /// <param name="uri">The URI of the file about to be stored</param>
  /// <returns></returns>
  private string convertFilename(Uri uri)
  {
   string result = m_spider.OutputPath;
   int index1;
   int index2;   

   // add an ending backslash if needed
   if( result[result.Length-1]!='\\' )
    result = result+"\\";

   // strip the query if needed

   String path = uri.PathAndQuery;
   int queryIndex = path.IndexOf("?");
   if( queryIndex!=-1 )
    path = path.Substring(0,queryIndex);

   // see if an ending / is missing from a directory only
   
   int lastSlash = path.LastIndexOf('/');
   int lastDot = path.LastIndexOf('.');

   if( path[path.Length-1]!='/' )
   {
    if(lastSlash>lastDot)
     path+="/"+IndexFile;
   }

   // determine actual filename  
   lastSlash = path.LastIndexOf('/');

   string filename = "";
   if(lastSlash!=-1)
   {
    filename=path.Substring(1+lastSlash);
    path = path.Substring(0,1+lastSlash);
    if(filename.Equals("") )
     filename=IndexFile;
   }

   // create the directory structure if needed
   index1 = 1;
   do
   {
    index2 = path.IndexOf('/',index1);
    if(index2!=-1)
    {
     String dirpart = path.Substring(index1,index2-index1);
     result+=dirpart;
     result+="//";
    
    
     Directory.CreateDirectory(result);

     index1 = index2+1;     
    }
   } while(index2!=-1);   

   // attach name
   result+=filename;

   return result;
  }

  /// <summary>
  /// Save a binary file to disk.
  /// </summary>
  /// <param name="response">The response used to save the file</param>
  private void SaveBinaryFile(WebResponse response)
  {
   byte []buffer = new byte[1024];

   if( m_spider.OutputPath==null )
    return;

   string filename = convertFilename( response.ResponseUri );
   Stream outStream = File.Create( filename );
   Stream inStream = response.GetResponseStream(); 
   
   int l;
   do
   {
    l = inStream.Read(buffer,0,buffer.Length);
    if(l>0)
     outStream.Write(buffer,0,l);
   }
   while(l>0);
   
   outStream.Close();
   inStream.Close();

  }

  /// <summary>
  /// Save a text file.
  /// </summary>
  /// <param name="buffer">The text to save</param>
  private void SaveTextFile(string buffer)
  {
   if( m_spider.OutputPath==null )
    return;

   string filename = convertFilename( m_uri );
   StreamWriter outStream = new StreamWriter( filename );
   outStream.Write(buffer);
   outStream.Close();
  }

  /// <summary>
  /// Download a page
  /// </summary>
  /// <returns>The data downloaded from the page</returns>
  private string GetPage()
  {
   WebResponse response = null;
   Stream stream = null;
   StreamReader reader = null;

   try
   {
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(m_uri);
       
    response = request.GetResponse();
    stream = response.GetResponseStream(); 

    if( !response.ContentType.ToLower().StartsWith("text/") )
    {
     SaveBinaryFile(response);
     return null;
    }

    string buffer = "",line;

    reader = new StreamReader(stream);
   
    while( (line = reader.ReadLine())!=null )
    {
     buffer+=line+"/r/n";
    }
   
    SaveTextFile(buffer);
    return buffer;
   }
   catch(WebException e)
   {
    System.Console.WriteLine("下載失敗,錯誤:" + e);
    return null;
   }
   catch(IOException e)
   {
    System.Console.WriteLine("下載失敗,錯誤:" + e);
    return null;
   }
   finally
   {
    if( reader!=null ) reader.Close();
    if( stream!=null ) stream.Close();
    if( response!=null ) response.Close();
   }
  }

  /// <summary>
   /// Process each link encountered. The link will be recorded
   /// for later spidering if it is an http or https document,
   /// has not been visited before (determined by the Spider class),
   /// and is on the same host as the original base URL.
  /// </summary>
  /// <param name="link">The URL to process</param>
  private void ProcessLink(string link)
  {
   Uri url;

   // fully expand this URL if it was a relative link
   try
   {
    url = new Uri(m_uri,link,false);
   }
   catch(UriFormatException e)
   {
    System.Console.WriteLine( "Invalid URI:" + link +" Error:" + e.Message);
    return;
   }

   if(!url.Scheme.ToLower().Equals("http") &&
    !url.Scheme.ToLower().Equals("https") )
    return;

   // comment out this line if you would like to spider
   // the whole Internet (yeah right, but it will try)
   if( !url.Host.ToLower().Equals( m_uri.Host.ToLower() ) )
    return;

   //System.Console.WriteLine( "Queue:"+url );
   m_spider.addURI( url );
  }

  /// <summary>
  /// Process a URL
  /// </summary>
  /// <param name="page">the URL to process</param>
  private void ProcessPage(string page)
  {
   ParseHTML parse = new ParseHTML();
   parse.Source = page;

   while(!parse.Eof())
   {
    char ch = parse.Parse();
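     // Parse() returns 0 when it has just consumed a tag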
    if(ch==0)
    {
     Attribute a = parse.GetTag()["HREF"];
     if( a!=null )
      ProcessLink(a.Value);
     
     a = parse.GetTag()["SRC"];
     if( a!=null )
      ProcessLink(a.Value);
    }
   }
  }


  /// <summary>
  /// This method is the main loop for the spider threads.
   /// This method will wait for URLs to become available,
  /// and then process them.
  /// </summary>
  public void Process()
  {
   while(!m_spider.Quit )
   {
    m_uri = m_spider.ObtainWork();
    
    m_spider.SpiderDone.WorkerBegin();
    System.Console.WriteLine("Download("+this.Number+"):"+m_uri);   
    string page = GetPage();
    if(page!=null)
     ProcessPage(page);
    m_spider.SpiderDone.WorkerEnd();
   }
  }

  /// <summary>
  /// Start the thread.
  /// </summary>
  public void start()
  {
   ThreadStart ts = new ThreadStart( this.Process );
   m_thread = new Thread(ts);
   m_thread.Start();
  }

  /// <summary>
  /// The thread number. Used only to identify this thread.
  /// </summary>
  public int Number
  {
   get
   {
    return m_number;
   }

   set
   {
    m_number = value;
   }
  
  }
 }
}

Done.cs


using System;
using System.Threading;

namespace Spider
{
 /// <summary>
 /// This is a very simple object that
 /// allows the spider to determine when
 /// it is done. This object implements
 /// a simple lock that the spider class
 /// can wait on to determine completion.
 /// Done is defined as the spider having
 /// no more work to complete.
 ///
 /// This spider is copyright 2003 by Jeff Heaton. However, it is
 /// released under a Limited GNU Public License (LGPL). You may
 /// use it freely in your own programs. For the latest version visit
 /// http://www.jeffheaton.com.
 ///
 /// </summary>
 public class Done
 {

  /// <summary>
   /// The number of DocumentWorker
  /// threads that are currently working
  /// on something.
  /// </summary>
  private int m_activeThreads = 0;

  /// <summary>
  /// This boolean keeps track of if
  /// the very first thread has started
  /// or not. This prevents this object
  /// from falsely reporting that the spider
  /// is done, just because the first thread
  /// has not yet started.
  /// </summary>
  private bool m_started = false;


  
  /// <summary>
  /// This method can be called to block
  /// the current thread until the spider
  /// is done.
  /// </summary>
  public void WaitDone()
  {
   Monitor.Enter(this);
   while ( m_activeThreads>0 )
   {
    Monitor.Wait(this);
   }
   Monitor.Exit(this);
  }

  /// <summary>
  /// Called to wait for the first thread to
  /// start. Once this method returns the
  /// spidering process has begun.
  /// </summary>
  public void WaitBegin()
  {
   Monitor.Enter(this);
   while ( !m_started )
   {
    Monitor.Wait(this);
   }
   Monitor.Exit(this);
  }


  /// <summary>
   /// Called by a DocumentWorker object
  /// to indicate that it has begun
  /// working on a workload.
  /// </summary>
  public void WorkerBegin()
  {
   Monitor.Enter(this);
   m_activeThreads++;
   m_started = true;
   Monitor.Pulse(this);
   Monitor.Exit(this);
  }

  /// <summary>
   /// Called by a DocumentWorker object to
  /// indicate that it has completed a
  /// workload.
  /// </summary>
  public void WorkerEnd()
  {
   Monitor.Enter(this);
   m_activeThreads--;
   Monitor.Pulse(this);
   Monitor.Exit(this);
  }

  /// <summary>
  /// Called to reset this object to
  /// its initial state.
  /// </summary>
  public void Reset()
  {
   Monitor.Enter(this);
   m_activeThreads = 0;
   Monitor.Exit(this);
  }
 }
}

ParseHTML.cs


using System;

namespace Spider
{
 /// <summary>
 /// Summary description for ParseHTML.
 ///
 /// This spider is copyright 2003 by Jeff Heaton. However, it is
 /// released under a Limited GNU Public License (LGPL). You may
 /// use it freely in your own programs. For the latest version visit
 /// http://www.jeffheaton.com.
 ///
 /// </summary>

 public class ParseHTML:Parse
 {
  public AttributeList GetTag()
  {
   AttributeList tag = new AttributeList();
   tag.Name = m_tag;

   foreach(Attribute x in List)
   {
    tag.Add((Attribute)x.Clone());
   }

   return tag;
  }

  public String BuildTag()
  {
   String buffer="<";
   buffer+=m_tag;
   int i=0;
   while ( this[i]!=null )
   {// has attributes
    buffer+=" ";
    if ( this[i].Value == null )
    {
     if ( this[i].Delim!=0 )
      buffer+=this[i].Delim;
     buffer+=this[i].Name;
     if ( this[i].Delim!=0 )
      buffer+=this[i].Delim;
    }
    else
    {
     buffer+=this[i].Name;
     if ( this[i].Value!=null )
     {
      buffer+="=";
      if ( this[i].Delim!=0 )
       buffer+=this[i].Delim;
      buffer+=this[i].Value;
      if ( this[i].Delim!=0 )
       buffer+=this[i].Delim;
     }
    }
    i++;
   }
   buffer+=">";
   return buffer;
  }

  protected void ParseTag()
  {
   m_tag="";
   Clear();

   // Is it a comment?
   if ( (GetCurrentChar()=='!') &&
    (GetCurrentChar(1)=='-')&&
    (GetCurrentChar(2)=='-') )
   {
    while ( !Eof() )
    {
     if ( (GetCurrentChar()=='-') &&
      (GetCurrentChar(1)=='-')&&
      (GetCurrentChar(2)=='>') )
      break;
      if ( GetCurrentChar()!='\r' )
      m_tag+=GetCurrentChar();
     Advance();
    }
    m_tag+="--";
    Advance();
    Advance();
    Advance();
    ParseDelim = (char)0;
    return;
   }

   // Find the tag name
   while ( !Eof() )
   {
    if ( IsWhiteSpace(GetCurrentChar()) || (GetCurrentChar()=='>') )
     break;
    m_tag+=GetCurrentChar();
    Advance();
   }

   EatWhiteSpace();

   // Get the attributes
   while ( GetCurrentChar()!='>' )
   {
    ParseName = "";
    ParseValue = "";
    ParseDelim = (char)0;

    ParseAttributeName();

    if ( GetCurrentChar()=='>' )
    {
     AddAttribute();
     break;
    }

    // Get the value(if any)
    ParseAttributeValue();
    AddAttribute();
   }
   Advance();
  }


  public char Parse()
  {
   if( GetCurrentChar()=='<' )
   {
    Advance();

    char ch=char.ToUpper(GetCurrentChar());
    if ( (ch>='A') && (ch<='Z') || (ch=='!') || (ch=='/') )
    {
     ParseTag();
     return (char)0;
    }
    else return(AdvanceCurrentChar());
   }
   else return(AdvanceCurrentChar());
  }
 }
}

Spider.cs


using System;
using System.Collections;
using System.Net;
using System.IO;
using System.Threading;

namespace Spider
{
 /// <summary>
 /// The main class for the spider. This spider can be used with the
 /// SpiderForm form that has been provided. The spider is completely
 /// self-contained. If you would like to use the spider with your own
 /// application, just remove the references to m_spiderForm from this file.
 ///
 /// The files needed for the spider are:
 ///
 /// Attribute.cs - Used by the HTML parser
 /// AttributeList.cs - Used by the HTML parser
 /// DocumentWorker - Used to "thread" the spider
 /// Done.cs - Allows the spider to know when it is done
 /// Parse.cs - Used by the HTML parser
 /// ParseHTML.cs - The HTML parser
 /// Spider.cs - This file
 /// SpiderForm.cs - Demo of how to use the spider
 ///
 /// This spider is copyright 2003 by Jeff Heaton. However, it is
 /// released under a Limited GNU Public License (LGPL). You may
 /// use it freely in your own programs. For the latest version visit
 /// http://www.jeffheaton.com.
 ///
 /// </summary>
 public class Spider
 {
  /// <summary>
   /// The URLs that have already been processed.
  /// </summary>
  private Hashtable m_already;

  /// <summary>
   /// URLs that are waiting to be processed.
  /// </summary>
  private Queue m_workload;

  /// <summary>
   /// The first URL to spider. All other URLs must have the
  /// same hostname as this URL.
  /// </summary>
  private Uri m_base;

  /// <summary>
  /// The directory to save the spider output to.
  /// </summary>
  private string m_outputPath;

  /// <summary>
  /// The form that the spider will report its
  /// progress to.
  /// </summary>
  private SpiderForm m_spiderForm;

  /// <summary>
   /// How many URLs the spider has processed.
  /// </summary>
  private int m_urlCount = 0;

  /// <summary>
  /// When did the spider start working
  /// </summary>
  private long m_startTime = 0;

  /// <summary>
  /// Used to keep track of when the spider might be done.
  /// </summary>
  private Done m_done = new Done();  

  /// <summary>
  /// Used to tell the spider to quit.
  /// </summary>
  private bool m_quit;

  /// <summary>
  /// The status for each URL that was processed.
  /// </summary>
  enum Status { STATUS_FAILED, STATUS_SUCCESS, STATUS_QUEUED };


  /// <summary>
  /// The constructor
  /// </summary>
  public Spider()
  {
   reset();
  }

  /// <summary>
  /// Call to reset from a previous run of the spider
  /// </summary>
  public void reset()
  {
   m_already = new Hashtable();
   m_workload = new Queue();
   m_quit = false;
  }

  /// <summary>
   /// Add the specified URL to the list of URIs to spider.
   /// This is usually only used by the spider itself, as
   /// new URLs are found.
  /// </summary>
  /// <param name="uri">The URI to add</param>
  public void addURI(Uri uri)
  {
   Monitor.Enter(this);
   if( !m_already.Contains(uri) )
   {
    m_already.Add(uri,Status.STATUS_QUEUED);
    m_workload.Enqueue(uri);
   }
   Monitor.Pulse(this);
   Monitor.Exit(this);
  }

  /// <summary>
  /// The URI that is to be spidered
  /// </summary>
  public Uri BaseURI
  {
   get
   {
    return m_base;
   }

   set
   {
    m_base = value;
   }
  }

  /// <summary>
  /// The local directory to save the spidered files to
  /// </summary>
  public string OutputPath
  {
   get
   {
    return m_outputPath;
   }

   set
   {
    m_outputPath = value;
   }
  }

  /// <summary>
  /// The object that the spider reports its
  /// results to.
  /// </summary>
  public SpiderForm ReportTo
  {
   get
   {
    return m_spiderForm;
   }

   set
   {
    m_spiderForm = value;
   }
  }

  /// <summary>
  /// Set to true to request the spider to quit.
  /// </summary>
  public bool Quit
  {
   get
   {
    return m_quit;
   }

   set
   {
    m_quit = value;
   }
  }

  /// <summary>
  /// Used to determine if the spider is done,
  /// this object is usually only used internally
  /// by the spider.
  /// </summary>
  public Done SpiderDone
  {
   get
   {
    return m_done;
   }

  }

  /// <summary>
   /// Called by the worker threads to obtain a URL
   /// to process.
  /// </summary>
  /// <returns>The next URL to process.</returns>
  public Uri ObtainWork()
  {
   Monitor.Enter(this);
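    // block until another thread queues a URL (addURI pulses this monitor)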
   while(m_workload.Count<1)
   {
    Monitor.Wait(this);
   }


   Uri next = (Uri)m_workload.Dequeue();
   if(m_spiderForm!=null)
   {
    m_spiderForm.SetLastURL(next.ToString());
    m_spiderForm.SetProcessedCount(""+(m_urlCount++));
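     // ticks are 100 ns, so dividing by 10,000,000 gives elapsed seconds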
    long etime = (System.DateTime.Now.Ticks-m_startTime)/10000000L;
    long urls = (etime==0)?0:m_urlCount/etime;
    m_spiderForm.SetElapsedTime( etime/60 + " minutes (" + urls +" urls/sec)" );
   }

   Monitor.Exit(this);
   return next;
  }

  /// <summary>
  /// Start the spider.
  /// </summary>
  /// <param name="baseURI">The base URI to spider</param>
  /// <param name="threads">The number of threads to use</param>
  public void Start(Uri baseURI,int threads)
  {
   // init the spider
   m_quit = false;

   m_base = baseURI;
   addURI(m_base);
    m_startTime = System.DateTime.Now.Ticks;
   m_done.Reset();
  
    // start up the worker threads (one per requested thread)

    for(int i=1;i<=threads;i++)
   {    
    DocumentWorker worker = new DocumentWorker(this);
    worker.Number = i;
    worker.start();
   }

   // now wait to be done

   m_done.WaitBegin();
   m_done.WaitDone();   
  }
 }
}
