
Table of Contents

      
    16 August 2011
    Volume 41 Issue 4
    Articles
    Locally linear discriminant embedding with nonparametric method
    WANG Xi-zhao,BAI Li-jie*,HUA Qiang, LIU Yu-chao
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  1-6. 
    Abstract ( 370 )   PDF (736KB) ( 1430 )   Save
    Related Articles | Metrics

    Locally linear discriminant embedding (LLDE) effectively enhances the discriminability of locally linear embedding (LLE) by adding the maximum margin criterion (MMC) to the objective function of LLE. However, LLDE seeks to preserve the global discriminative information of the samples, and its optimal result is achieved only when the data follow a Gaussian distribution. A novel supervised dimensionality reduction method, nonparametric locally linear discriminant embedding (NLLDE), was proposed to overcome these drawbacks by adding a weighted nonparametric maximum margin criterion (WNMMC) to the objective function of LLE. NLLDE explores the local discriminative information of the data and thus has more discriminating power. Furthermore, NLLDE does not assume a particular form for the class densities, which allows it to be applied in more fields. Experimental results on the Yale and PIE face databases indicated the effectiveness of the method.

    A semi-supervised learning method based on information entropy to extract the domain entity relation
    GUO Jian-yi1,2, LEI Chun-ya1, YU Zheng-tao1,2, SU Lei1,2, ZHAO Jun1, TIAN Wei1
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  7-12. 
    Abstract ( 354 )   PDF (676KB) ( 1886 )   Save

    To overcome the limitation imposed on supervised learning methods by the scale of the labeled corpus, a semi-supervised method based on information entropy was proposed to extract entity relations using small-scale training data. First, small-scale training data were selected in combination with a field vocabulary, and an initial maximum entropy classifier of certain accuracy was constructed to predict new candidate instances from the unlabeled data. Second, by applying the information-entropy method with different entropy thresholds over many cycles, new instances of higher credibility were selected from the candidate instances to expand the training data. Finally, the classifier was re-trained iteratively with the expanded training data until its performance stabilized and the iteration terminated, thereby achieving field entity relation extraction. Experimental results showed that the semi-supervised learning method based on information entropy achieved better learning results than other methods.
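
    The abstract does not give the authors' implementation, but the entropy-based selection step of such a self-training loop can be sketched as follows. This is a minimal illustration: the classifier is assumed to already produce class-probability vectors, and the instance names and threshold are placeholders.

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a predicted class distribution."""
    return -sum(p * math.log(p, 2) for p in probs if p > 0)

def select_confident(candidates, threshold):
    """Keep candidate instances whose prediction entropy is below threshold.

    candidates: list of (instance, class_probabilities) pairs produced by
    the current classifier; low entropy = high-credibility prediction.
    """
    return [inst for inst, probs in candidates if entropy(probs) < threshold]

# One toy round of self-training: confident predictions join the training set.
candidates = [
    ("pair_a", [0.95, 0.05]),   # near-certain prediction -> low entropy
    ("pair_b", [0.55, 0.45]),   # ambiguous prediction    -> high entropy
]
confident = select_confident(candidates, threshold=0.5)
```

    In a full loop, `confident` would be added to the training data and the maximum entropy classifier re-trained, repeating until performance stabilizes.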

    Digital video forgeries detection based on bidirectional motion vectors
    HUANG Tian-qiang1,2, CHEN Zhi-wen1
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  13-19. 
    Abstract ( 338 )   PDF (1414KB) ( 1384 )   Save

    With the popularity of video editing software, detecting digital video forgeries is becoming more and more important. Based on the continuity of content between frames, a digital video forgery detection method based on bidirectional motion vectors was proposed. By decoding bidirectionally predicted video frames (B frames), the bidirectional motion vectors were extracted. The average of the largest differences between each data object in the motion vector sequence and its k nearest neighbors was then regarded as the peak score, and by calculating the mean and standard deviation the threshold was set adaptively to conduct peak detection and thus find the outliers. Experimental results showed that the method could effectively detect the deletion and insertion tampering of video frames against a moving background.
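
    The adaptive-threshold peak detection described above can be illustrated with a short sketch. This is not the authors' code: the input is a hypothetical per-frame motion-magnitude sequence, and the neighbor count and threshold factor are assumptions.

```python
import statistics

def knn_diff_scores(seq, k=2):
    """For each point, the average absolute difference to its k closest
    values in the sequence (a crude per-frame anomaly score)."""
    scores = []
    for i, v in enumerate(seq):
        diffs = sorted(abs(v - w) for j, w in enumerate(seq) if j != i)
        scores.append(sum(diffs[:k]) / k)
    return scores

def detect_outliers(seq, k=2, factor=2.0):
    """Adaptive threshold: mean + factor * std of the anomaly scores."""
    scores = knn_diff_scores(seq, k)
    thr = statistics.mean(scores) + factor * statistics.pstdev(scores)
    return [i for i, s in enumerate(scores) if s > thr]

# A spike in motion-vector magnitude at index 4 marks a candidate splice point.
frames = [1.0, 1.1, 0.9, 1.0, 9.0, 1.0, 1.1, 0.9]
```

    Calling `detect_outliers(frames)` flags frame 4, mirroring how a frame deletion or insertion disturbs the otherwise continuous motion-vector sequence.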

    A fast affinity propagation clustering algorithm
    LIU Xiao-yong1,2,3, FU Hui2
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  20-23. 
    Abstract ( 461 )   PDF (960KB) ( 4069 )   Save

    An important parameter of the affinity propagation (AP) algorithm, the damping factor, affects its speed. Because the value of the damping factor is fixed in the traditional AP algorithm, the convergence performance of AP is sensitive to the choice of this parameter. A novel fast AP algorithm, F-AP, was proposed. The new algorithm uses a constriction factor to regulate the damping factor dynamically. Three datasets and the iris dataset were used to compare AP and F-AP. The numerical results showed that F-AP could accelerate the convergence process effectively.
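
    The abstract does not specify the F-AP update formula, but the role of the damping factor in AP, and the idea of shrinking it dynamically, can be sketched as follows. The schedule shown is a hypothetical example, not the authors' constriction formula.

```python
def damped_update(old, new, lam):
    """AP-style damped message update: convex mix of the previous and the
    freshly computed responsibility/availability value."""
    return lam * old + (1 - lam) * new

def shrinking_damping(t, lam_max=0.9, lam_min=0.5, rate=0.1):
    """An illustrative dynamic damping schedule: heavy damping early
    (for numerical stability), lighter damping later (for speed)."""
    return lam_min + (lam_max - lam_min) / (1 + rate * t)
```

    With a fixed damping factor, every iteration mixes old and new messages in the same ratio; letting the factor decay as the iteration count `t` grows lets later iterations take larger steps toward the fixed point.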

    An attribute reduction algorithm based on partition subset
    ZHAI Jun-hai, GAO Yuan-yuan, WANG Xi-zhao, CHEN Jun-fen
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  24-28. 
    Abstract ( 335 )   PDF (353KB) ( 1174 )   Save

    The attribute reduction algorithm based on the significance of attributes, proposed by Pawlak, is one of the most commonly used algorithms; it measures the significance of an attribute by calculating the granularity of the corresponding equivalence relation. However, its computational complexity is very high, because computing the significance of every attribute requires computing the partitions of different equivalence relations over the whole universe. Motivated by the idea of set partition in decision tree methods, an attribute reduction algorithm based on set partition was proposed to improve the significance-based algorithm. Its basic idea is to compute a new partition iteratively by adding a non-core attribute to the core attribute set, starting from the partition induced by the core attribute set. Under the constraint of keeping the positive region of the decision attribute invariant, the attribute set with the most refined partition is an attribute reduct. Theoretical analysis showed that the algorithm could reduce the time complexity of computing attribute reductions and thereby improve efficiency.
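
    The two rough-set notions the algorithm relies on, the partition induced by an attribute set and the positive region of the decision attribute, can be computed directly; a minimal sketch follows (the toy decision table is hypothetical).

```python
def partition(universe, table, attrs):
    """Group objects into equivalence classes by their values on attrs."""
    blocks = {}
    for obj in universe:
        key = tuple(table[obj][a] for a in attrs)
        blocks.setdefault(key, set()).add(obj)
    return list(blocks.values())

def positive_region(universe, table, cond_attrs, dec_attr):
    """Objects whose condition-attribute class falls inside a single
    decision class, i.e. objects classified consistently by cond_attrs."""
    pos = set()
    for block in partition(universe, table, cond_attrs):
        decisions = {table[obj][dec_attr] for obj in block}
        if len(decisions) == 1:
            pos |= block
    return pos

# Toy decision table: objects 3 and 4 agree on (a, b) but disagree on d,
# so they fall outside the positive region.
table = {
    1: {"a": 0, "b": 0, "d": "y"},
    2: {"a": 0, "b": 1, "d": "n"},
    3: {"a": 1, "b": 0, "d": "y"},
    4: {"a": 1, "b": 0, "d": "n"},
}
universe = [1, 2, 3, 4]
```

    The proposed algorithm refines such a partition incrementally as attributes are added, instead of recomputing it from scratch over the whole universe for every candidate attribute.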

    Video-based fingerprint verification using machine learning
    HE Xue-ying1, 2, QIN Wei1, YIN Yi-long1 *, ZHAO Lian-zheng1,QIAO Hao3
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  29-33. 
    Abstract ( 427 )   PDF (712KB) ( 1193 )   Save

    Fingerprint video was utilized for fingerprint verification. Inside similarity (SI) and outside similarity (SO) were defined and used to calculate the final matching score of two fingerprint videos. A new idea was proposed to optimize verification performance: a matching result with the two features SI and SO was viewed as a sample, and the task of deciding whether two fingerprint videos form a genuine match or an impostor match was converted into a classification task on samples with two-dimensional features (SI, SO). In addition, machine learning algorithms were adopted to classify every matching result. Experimental results showed that the minimum error rates obtained with the machine learning algorithms were 0.1704% and 0.1106%, while those obtained with a threshold were 0.2229% and 0.1700%. The accuracy of video-based fingerprint verification was thus significantly improved compared with thresholding, and the method avoided the complex process of selecting parameters and thresholds.

    Distance-based outlier detection over uncertain data
    YANG Jin-wei, WANG Li-zhen*, CHEN Hong-mei, ZHAO Li-hong
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  34-37. 
    Abstract ( 279 )   PDF (657KB) ( 2546 )   Save

    Aimed at the problem that real-world data exhibit uncertainty, a new outlier detection method was proposed. First, the notions of distance-based outlier detection over uncertain data were defined. Then, an algorithm was designed to mine the corresponding outliers over uncertain data. Third, a pruning algorithm was designed to reduce the time complexity. Finally, experimental studies illustrated that the algorithms are efficient for outlier detection over uncertain data.
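
    A minimal sketch of the distance-based idea in the uncertain setting follows. The abstract does not define the model precisely; here each point is assumed to carry an existence probability, and a point is flagged when its expected number of neighbors within distance d falls below k. The data model and parameters are assumptions for illustration.

```python
def expected_neighbors(points, idx, d):
    """Expected number of neighbours of points[idx] within distance d.
    Each point is ((x, y), prob): its location and existence probability."""
    (x0, y0), _ = points[idx]
    total = 0.0
    for j, ((x, y), p) in enumerate(points):
        if j != idx and ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 <= d:
            total += p
    return total

def outliers(points, d, k):
    """Distance-based outliers: expected neighbour count below k."""
    return [i for i in range(len(points))
            if expected_neighbors(points, i, d) < k]

# Three clustered points (one only half-certain) and one isolated point.
data = [((0, 0), 1.0), ((0, 1), 1.0), ((1, 0), 0.5), ((10, 10), 1.0)]
```

    A pruning step, as the paper proposes, would skip the inner distance loop for points whose neighborhood count is already known to exceed k.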

    Adaptive pheromone updating ant colony algorithms for solving QoS multicast routing problems
    LI Yong-sheng, QU Liang-dong, LI Xi
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  38-43. 
    Abstract ( 296 )   PDF (764KB) ( 1544 )   Save

    The ant colony algorithm easily falls into local optima and converges slowly when solving multicast routing problems with multiple QoS constraints. Therefore, an adaptive pheromone updating ant colony algorithm was proposed to solve such problems. First, chaos perturbation was used to improve the node selection strategy, and the evaporation coefficient was adjusted dynamically according to the intensity of the pheromone trail, which improved the global search ability. Second, the pheromone trail on a path was updated adaptively according to the solutions found by the algorithm, which significantly improved convergence. According to the simulations, under the same experimental conditions the basic ant algorithm converged to a local optimal cost of 87 in 12 iterations, and the multi-behaved ant colony algorithm combined with quantum-behaved particle swarm optimization converged to a local optimal cost of 66 in 7 iterations, while the proposed algorithm converged to the global optimal cost of 62 in 10 iterations, showing that it outperformed the two previous algorithms.

    A method of fuzzy integral ensemble classifiers for handling concept-drifting data streams
    JU Chun-hua1,2, CHEN Zhi-qi1*
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  44-48. 
    Abstract ( 265 )   PDF (1105KB) ( 1318 )   Save

    A new classification algorithm, FI-MDS, based on fuzzy integral fusion was proposed for mining data streams with concept drifts and noise; it combines fuzzy integral fusion with ensemble multi-classifier technology. First, the decision profile is obtained by passing training samples through the base classifiers, and then the final classification result is obtained via fuzzy integral fusion. Meanwhile, a dynamic weight update was introduced to improve the adaptability of the algorithm. The experimental results indicated that the method could enhance the detection accuracy of concept drifts, that complex classification problems in data streams could be solved, and that the algorithm has higher classification performance, effectiveness and robustness.
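
    The abstract does not say which fuzzy integral FI-MDS uses; as one common choice, the discrete Choquet integral combining base-classifier supports can be sketched as below. The classifier names and the fuzzy measure values are hypothetical.

```python
def choquet(scores, mu):
    """Discrete Choquet integral of classifier scores w.r.t. fuzzy measure mu.

    scores: {classifier_name: support for the class, in [0, 1]}
    mu: fuzzy measure, mapping frozensets of classifier names to their
        importance in [0, 1] (mu of the full set should be 1).
    """
    items = sorted(scores.items(), key=lambda kv: kv[1])  # ascending support
    names = [n for n, _ in items]
    total, prev = 0.0, 0.0
    for i, (name, s) in enumerate(items):
        coalition = frozenset(names[i:])       # classifiers scoring >= s
        total += (s - prev) * mu[coalition]
        prev = s
    return total

# Two base classifiers with equal, additive importance (a sanity case in
# which the Choquet integral reduces to a weighted mean).
scores = {"clf_a": 0.2, "clf_b": 0.6}
mu = {
    frozenset({"clf_a", "clf_b"}): 1.0,
    frozenset({"clf_a"}): 0.5,
    frozenset({"clf_b"}): 0.5,
}
```

    The class with the largest fused value over all classes would be reported; non-additive measures let the fusion reward or penalize particular coalitions of classifiers, which is the point of using a fuzzy integral rather than a plain average.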

    Concurrent frequent itemsets mining algorithm based on dynamic prune of FP-tree
    SONG Wei, LIU Wen-bo, LI Jin-hong
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  49-55. 
    Abstract ( 376 )   PDF (1442KB) ( 1292 )   Save

    To solve the problem of the huge memory usage of FP-tree construction and traversal in FP-growth, the dynamic-prune algorithm, a concurrent frequent itemset mining algorithm based on dynamically pruning the FP-tree, was proposed. First, by recording the support counts of frequent items during FP-tree construction, dynamic pruning of the FP-tree was implemented. Second, using a concurrency strategy, the construction of the FP-tree and the discovery of frequent itemsets could proceed simultaneously. Compared with the FP-growth algorithm, dynamic-prune does not need to mine frequent itemsets after the FP-tree has been fully built, and its memory cost is reduced. Experimental results showed that the dynamic-prune algorithm outperformed FP-growth in both efficiency and scalability.

    Fast computation of characteristic boundary points for improving geometric ensembles
    LI Yu-jian, MENG Dong-xia*, GUI Zhi-ming
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  56-60. 
    Abstract ( 428 )   PDF (757KB) ( 1266 )   Save

    To address the low efficiency of optimized geometric ensembles (OGE), caused by a large number of redundant computations when constructing the set of characteristic boundary points, two improved geometric ensembles, Gabriel OGE and heuristic OGE, were proposed by applying the Gabriel neighboring rule and its heuristics; both accelerate the computation of characteristic boundary points compared with OGE in experiments. The results showed that although Gabriel OGE has the same time complexity as OGE in computing characteristic boundary points, it is much faster because it removes many redundant computations. Heuristic OGE not only decreases the average time complexity to O(dM²) but is also the most efficient when dealing with large datasets. Gabriel OGE and heuristic OGE effectively increase the computing speed and greatly reduce the computing time while producing the same classification results as OGE.
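
    The Gabriel neighboring rule that both variants build on has a simple geometric test: two points are Gabriel neighbors iff no third point lies inside the circle whose diameter is the segment between them. A minimal 2-D sketch (not the OGE implementation) follows.

```python
def is_gabriel_edge(a, b, points):
    """True iff a and b are Gabriel neighbours: for every other point c,
    d(a,c)^2 + d(b,c)^2 >= d(a,b)^2, i.e. c is outside the circle with
    diameter ab."""
    d2 = lambda p, q: (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
    ab = d2(a, b)
    return all(d2(a, c) + d2(b, c) >= ab
               for c in points if c != a and c != b)
```

    Characteristic boundary points are then those joined by Gabriel edges to points of the opposite class, and restricting the search to such pairs is what removes the redundant computations mentioned above.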

    Film affective classification based on fuzzy theory and syllogism inference
    LIN Xin-qi, YAN Xiao-ming, ZHENG Zhi
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  61-67. 
    Abstract ( 313 )   PDF (1783KB) ( 1699 )   Save

    Due to the fuzzy nature of human emotional reactions, it is difficult to improve classification accuracy using only low-level features. Based on the relationships between affect and low-level features established in previous work, fuzzy membership functions were introduced, and the low-level features were processed according to the fuzzy principle of maximum membership degree. A fuzzy feature vector was thus obtained that captures the affective information of a given video clip and can be used to recognize the affective type by syllogistic inference. The experimental results showed that the fuzzy feature vector could narrow the gap between affects and low-level content. The classification accuracies of the three affects all exceeded 84%. Compared with existing methods, the total average classification accuracy increased by 9.33%, which demonstrated that the proposed algorithm could effectively improve the accuracy of film affective classification.
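
    The maximum-membership step can be illustrated with triangular membership functions over a single low-level feature. The feature, the affect labels and the triangle parameters below are hypothetical; the paper's actual features and functions are not given in the abstract.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def max_membership(x, classes):
    """Assign x to the affect class with the highest membership degree.
    classes: {label: (a, b, c)} triangular parameters per class."""
    return max(classes, key=lambda lbl: tri(x, *classes[lbl]))

# Hypothetical affect classes over a normalized motion-intensity feature.
affects = {
    "calm":  (0.0, 0.2, 0.5),
    "tense": (0.3, 0.7, 1.0),
}
```

    Applying this to each low-level feature yields the fuzzy feature vector of the clip, which the syllogistic inference stage then maps to an affect type.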

    Study on Agent-based simulation of banking queuing system
    ZHANG Qi-cong1, YANG Gong-ping2*
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  68-72. 
    Abstract ( 431 )   PDF (1110KB) ( 1475 )   Save

    To reduce the waiting time of customers in the current banking system, an Agent-based simulation model of a banking queuing system was developed. The customer, queue machine, server and automatic teller machine were abstracted as different Agents, and the behavior of the system was simulated through the interactions among them. A proposed algorithm for adjusting queues dynamically was implemented on the NetLogo platform. The experimental data and analysis showed that the model could simulate the real operation of a banking queuing system, use the existing resources rationally, reduce the average waiting time of customers and enhance customer satisfaction.

    Ensemble learning of multi-classifier for early classification of time series
    LI Xiao-bin1, LI Shi-yin2
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  73-78. 
    Abstract ( 298 )   PDF (826KB) ( 2073 )   Save

    To enable early classification of time series in time-sensitive application areas, an ensemble learning method named sequential subspace stacked generation (SSSG) was introduced. The method splits a time series into several sequential subspaces with sliding windows. Multiple first-layer classifiers are applied to these sequential subspaces to generate label probabilities, which are then fed into the second-layer classifier. The label of the time series can thus be predicted by the two-layer classifier. Experimental results showed that this method can both classify time series early and achieve higher classification accuracy than a single classifier.

    Algorithm based on communication system for constructing decision tree
    ZHANG Xiao-feng, ZHANG Zhi-wang, PANG Shan
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  79-84. 
    Abstract ( 304 )   PDF (602KB) ( 1219 )   Save

    The attribute selection criterion used in constructing decision trees has always been a focus in the area of data mining. Based on an analysis of ID3 and C4.5, this paper proposes two algorithms based on average self-information and average mutual information in a communication system. We prove that the two proposed algorithms are equivalent to ID3 and C4.5: information gain is equivalent to average mutual information in a communication system, and the information gain ratio is the same as the ratio of average mutual information to the entropy. Experiments on the AllElectronics dataset illustrate that, compared with information gain and the information gain ratio, the attribute selection criteria proposed in this paper are easy to compute and understand.
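
    The information gain criterion that the paper shows to equal average mutual information can be computed directly from a small table; a minimal sketch with a hypothetical two-attribute dataset follows.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, attr, target):
    """Information gain of splitting rows (list of dicts) on attr:
    H(target) minus the expected entropy after the split. This equals
    the average mutual information between attr and target."""
    base = entropy([r[target] for r in rows])
    n = len(rows)
    by_value = {}
    for r in rows:
        by_value.setdefault(r[attr], []).append(r[target])
    remainder = sum(len(v) / n * entropy(v) for v in by_value.values())
    return base - remainder

# Hypothetical table: "a" determines the class perfectly, "b" is noise.
rows = [
    {"a": 0, "b": 0, "cls": "yes"},
    {"a": 0, "b": 1, "cls": "yes"},
    {"a": 1, "b": 0, "cls": "no"},
    {"a": 1, "b": 1, "cls": "no"},
]
```

    ID3 picks the attribute with the largest such gain; C4.5 divides the gain by the entropy of the attribute itself, matching the ratio formulation above.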

    Enhanced visual-based density clustering algorithm
    JIANG Sheng-yi1, LUO Fang-lun1, YU Wen2
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  85-90. 
    Abstract ( 280 )   PDF (807KB) ( 1216 )   Save

    The visual-based density clustering algorithm is insensitive to the initial parameters, can identify clusters of arbitrary shape and can find the optimal clustering; the one-pass clustering algorithm is efficient and fast. Building on these features, a clustering algorithm that can process data with mixed attributes was studied. First, the visual-based density clustering algorithm was slightly improved so that it can process data with categorical attributes. Then, a two-stage clustering algorithm was put forward. In the first stage, the one-pass clustering algorithm groups the data into an original partition; in the second stage, the improved visual-based density clustering algorithm merges the original partition to obtain the final clusters. Experimental results on both real and synthetic datasets show that the presented clustering algorithm is effective and practicable.

    An under-sampling approach based on AdaBoost for ensembled classification
    SUN Xiao-yan1,2, ZHANG Hua-xiang1,2*, JI Hua1,2
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  91-94. 
    Abstract ( 372 )   PDF (332KB) ( 2193 )   Save

    Under-sampling easily discards useful information of the majority class in the classification of imbalanced data sets. An AdaBoost-based under-sampling ensemble approach, U-Ensemble, was therefore proposed to solve this problem. First, AdaBoost was applied to the imbalanced data set to obtain sample weights. Then, Bagging was used as the classifier, but bootstrap sampling was no longer used for the majority class; instead, some samples with larger weights and some with smaller weights were randomly selected, while ensuring that the number of samples taken from the majority class equaled the number of minority-class samples. Finally, the sampled majority-class samples and all the minority-class samples were combined as the training data set for each component classifier. Experimental results showed the effectiveness of U-Ensemble.
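
    The weight-guided sampling step can be sketched as below. The split between "hard" high-weight samples (taken deterministically here) and randomly chosen low-weight samples is an illustrative reading of the abstract, not the authors' exact procedure.

```python
import random

def under_sample(majority, weights, n):
    """Pick n majority-class samples using AdaBoost weights: half are the
    highest-weight (hard) examples, half are drawn at random from the
    lower-weight (easy, representative) remainder."""
    ranked = [s for _, s in sorted(zip(weights, majority),
                                   key=lambda t: t[0], reverse=True)]
    half = n // 2
    hard = ranked[:half]
    easy = random.sample(ranked[half:], n - half)
    return hard + easy
```

    Each component classifier of the ensemble then trains on such a sample combined with all minority-class samples, so no single under-sampled subset has to carry all the majority-class information alone.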

    An auto-focus algorithm for the tongue image acquisition based on image processing
    WEI Yu-ke1, LI Jiang-ping2, DUAN Yang-guang1, LU Bo-sheng1
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  95-100. 
    Abstract ( 440 )   PDF (1093KB) ( 1508 )   Save

    To rapidly and accurately acquire a clear tongue image, the distance between the camera and the tongue must be adjusted and controlled. This paper elaborated the factors that influence image quality, such as clarity, light reflection scale, shade scale, illumination condition and focal length, and the focusing method used in adjusting the positions of the lens and the subject. Combined with focusing technology, a mathematical model of the relation between image quality and the lens-subject distance was established. The auto-focus procedure searches for the maximum of the focusing evaluation function using a hill-climbing algorithm. Unlike the traditional hill-climbing search, which is apt to be disturbed by local extrema, the proposed algorithm preprocesses each image frame and uses a two-step, coarse-before-fine search, which effectively removes this disturbance. The method was implemented in VC++ 6.0 to adjust the acquisition device automatically and acquire the best-quality picture. Experimental results showed that the method is fast, precise and strongly resistant to interference.
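
    The coarse-before-fine maximization of a focus evaluation function can be sketched as follows. The focus metric here is a toy stand-in (real systems typically score image sharpness, e.g. gradient energy); the step sizes are assumptions.

```python
def hill_climb(f, lo, hi, coarse_step, fine_step):
    """Two-step focus search: scan [lo, hi] coarsely for the best region,
    then re-scan a window around it with a fine step."""
    def scan(a, b, step):
        best_x, best_v = a, f(a)
        x = a + step
        while x <= b:
            v = f(x)
            if v > best_v:
                best_x, best_v = x, v
            x += step
        return best_x

    cx = scan(lo, hi, coarse_step)                       # rough pass
    return scan(max(lo, cx - coarse_step),               # precise pass
                min(hi, cx + coarse_step), fine_step)

# Toy focus curve peaking at lens position 3.2.
focus = lambda x: -(x - 3.2) ** 2
```

    Scanning the whole range coarsely first makes the fine pass start near the global peak, which is how the two-step scheme sidesteps the local extrema that trap a plain single-step hill climb.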

    Kernel principal component analysis based super-resolution method
    YAN Zi-ye, LU Yao, LI Jian-wu, MA Yue
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  101-105. 
    Abstract ( 315 )   PDF (708KB) ( 1594 )   Save

    The match between the observed example and the training example set is one of the crucial problems in learning-based super resolution. The proposed method makes the match more accurate by mapping the observed low-resolution example into a reproducing kernel Hilbert space, avoiding wrong matches in learning-based super resolution and improving image quality. The algorithm first applies KPCA to the training examples to form a subspace, and then projects the observed example onto this subspace. The pre-images in the input space are obtained using a distance-constraint algorithm. Finally, the high-resolution image is obtained by recombining the produced image patches. Experimental results on the USPS data set show that the method is effective.

    Path planning based on semantic information in virtual environment
    CHEN Ming-zhi1, XU Chun-yao2, CHEN Jian2, YU Lun2
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  106-112. 
    Abstract ( 343 )   PDF (1440KB) ( 1791 )   Save

    To reduce the computational complexity of path planning in virtual environments, improve the rationality of the planned paths and enhance the broad adaptability of path planning algorithms, a layered path planning algorithm based on semantic information is proposed in this paper. Considering that current models of virtual humans and environments usually exploit only geometric information, a new modeling method integrating semantic information is presented; in addition, semantic restrictions are added to the path-searching algorithm to make the planned path accord better with human behavioral habits. Finally, the effectiveness of the layered path planning algorithm based on semantic information was verified in three respects: path length, computing time and number of expanded nodes. The experimental results showed that the computing time of the algorithm grew linearly with the scene size.

    Finite ridgelet transform-based edge detection algorithm
    ZHU Rui-ling1, WANG Xin2, HAN Guo-dong3
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  113-118. 
    Abstract ( 321 )   PDF (1354KB) ( 1124 )   Save

    The Laplace operator is apt to be affected by noise. To improve on this, an edge detection algorithm based on the finite ridgelet transform (FRIT) was proposed. Based on the Laplacian probability distribution function (PDF), the FRIT coefficients were analyzed and a maximum a posteriori (MAP) estimate of the FRIT coefficients was developed. According to the model for some edge detectors, an optimal threshold for edge detection was obtained and the edge image was extracted, both suppressing the effect of noise and preserving the edges. Experimental results showed that the proposed edge detector achieved better performance in both localization and noise suppression, and edge images could be better distinguished by the proposed method.

    Wavelet-neural network model based complex hydrological time series prediction
    ZHU Yue-long, LI Shi-jin, FAN Qing-song, WAN Ding-sheng
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  119-124. 
    Abstract ( 325 )   PDF (559KB) ( 1364 )   Save

    Time series prediction is one of the main research topics in time series analysis and is of great importance in both theoretical and applied respects. To improve the performance of the wavelet-neural network model on complex time series, a novel multi-factor prediction model is proposed. According to the adaptability of different wavelet functions to hydrological time series, a new criterion for selecting the wavelet function, based on weighted correlation coefficients, is also put forward. The proposed method was tested on predicting the daily flow of WANGJIABA station, an important observation site on the HUAIHE river. The chosen Haar wavelet and B3 spline wavelet produced higher prediction accuracy, which validates the effectiveness of the wavelet selection principle. Compared with the traditional wavelet neural network for a single time series, an improvement of at least 10% was observed for different prediction periods, and of 15% in forecasting the high flow during the disastrous flood period. All the experimental results show that the proposed multi-factor prediction model is effective for complex hydrological time series prediction.

    Improved approximation algorithm for the k-means clustering problem
    WANG Shou-qiang1, ZHU Da-ming2, SHI Shi-ying1
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  125-132. 
    Abstract ( 339 )   PDF (405KB) ( 1481 )   Save

    Rapid feedback analysis method for underground caverns during construction
    WANG Gang1, 2, JIANG Yu-jing2, LI Shucai3
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  133-136. 
    Abstract ( 408 )   PDF (409KB) ( 1097 )   Save

    It is impossible to obtain complete field data in underground engineering; for example, information on stresses, material properties and so on can only be partially known. Back-analysis aims to make relatively scarce site-specific data give the most help to geotechnical engineering. The key factors affecting back-analysis results were investigated in detail, especially the authenticity of the input data, the adaptability of the computational model and the reliability of the computed results. The orthogonal experimental method was used to design the numerical experiments, and a layered range-optimizing method was established to assemble the uncertain problem-specific parameters. Considering the geological characteristics of underground engineering, the measurement data and the interval analysis method, a rapid feedback method for large underground caverns during construction was established, and the spectrum of back-analysis was put forward. By analyzing the deformation behavior of underground structures during construction, sufficient feedback experiments can produce the relevant rock parameters, which help to optimize the subsequent construction process. When enough high-quality data are available, the feedback can give good predictions.

    An algorithm based on Bayesian network for web page recommendation
    WANG Ai-guo, LI Lian*, YANG Jing, CHEN Gui-lin
    JOURNAL OF SHANDONG UNIVERSITY (ENGINEERING SCIENCE). 2011, 41(4):  137-142. 
    Abstract ( 284 )   PDF (787KB) ( 2007 )   Save

    A model based on a Bayesian network and a corresponding algorithm for web page recommendation were presented to improve users' browsing behavior and enhance visiting efficiency. The model was constructed by collecting and analyzing the description files and log files on the servers and using the Bayesian network to analyze the dependence among web pages; the model was then built and the recommendation set generated. In experiments on the web log data sets provided by Microsoft, the obtained precision and coverage were both higher than 80%. The results of theoretical analysis and experiments indicated that the algorithm can make personalized recommendations for users online in real time. Compared with other existing algorithms, this algorithm can produce the recommendation set more quickly with higher precision and coverage.
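
    The paper's model is a Bayesian network learned from server logs; as a minimal stand-in for the dependence estimation that drives such a recommender, the conditional probabilities of one page following another can be counted directly from log sessions. The session data below are hypothetical, and a real Bayesian network would capture dependencies beyond this single-step conditioning.

```python
from collections import Counter, defaultdict

def build_model(sessions):
    """Estimate P(next_page | current_page) from server-log sessions
    (each session is an ordered list of visited pages)."""
    pair_counts = defaultdict(Counter)
    for session in sessions:
        for cur, nxt in zip(session, session[1:]):
            pair_counts[cur][nxt] += 1
    return pair_counts

def recommend(model, page, top_n=2):
    """Recommendation set: the top_n pages most likely to follow `page`."""
    return [p for p, _ in model[page].most_common(top_n)]

# Hypothetical log sessions.
sessions = [
    ["home", "news", "sports"],
    ["home", "news"],
    ["home", "about"],
]
model = build_model(sessions)
```

    Given the current page of a user's session, `recommend` returns the pages with the highest estimated transition probability, which is the online, real-time step of the recommendation algorithm.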