Data Mining vs Web Mining

What is the difference between data mining and web mining? Well, one of the significant factors is the structure of the data being mined. Common data mining applications discover patterns in structured data, such as a database managed by a DBMS. Web mining, by contrast, discovers patterns in less structured data, namely the Internet (WWW). In other words, we can say that web mining is data mining techniques applied to the WWW.

Types of Web Mining
Basically, web mining is of three types:

1. Web structure mining

Web structure mining uses graph theory to analyze a website's node and connection structure. Depending on the type of structural data involved, it is further divided into two kinds:

Extraction of patterns from hyperlinks on the web: a hyperlink is a structural element connecting a web page to another location.
Mining the structure of the document: a tree-like structure is used to analyze and describe the HTML or XHTML tags of a web page.
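To make the graph idea concrete, here is a small sketch in Python: a handful of hypothetical pages are modeled as a directed hyperlink graph and scored with a simplified PageRank iteration (the page names and link structure are made up for illustration):

```python
# Simplified PageRank over a tiny hypothetical hyperlink graph.
links = {
    "home": ["about", "products"],
    "about": ["home"],
    "products": ["home", "about"],
}

def pagerank(links, damping=0.85, iters=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        # Each page keeps a base share and receives a damped share
        # of the rank of every page that links to it.
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            share = rank[p] / len(outs)
            for q in outs:
                new[q] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
# "home" is linked to by every other page, so it ranks highest.
```

Real web structure mining works on graphs with millions of nodes, but the iteration is the same idea.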

2. Web Usage Mining
In web usage mining, data mining techniques are applied to discover trends and patterns in the browsing behavior of a website's visitors. Navigation patterns are extracted from traced browsing sessions, and the structure of the website can be designed accordingly. For example, if a particular feature of the website is used frequently by visitors, you should look to enhance and promote it so that it appeals to even more users. This kind of mining makes use of web accesses and logs. Simply by understanding visitors' movements and surfing behavior, you can meet their needs and preferences in a better manner and popularize your website among the masses in the internet arena.
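As a small illustration, the Python sketch below counts page hits from a few invented access-log lines (in Apache's Common Log Format) and picks out the most visited page, the kind of basic statistic that usage mining starts from:

```python
from collections import Counter

# A few invented log lines in Common Log Format.
log_lines = [
    '1.2.3.4 - - [10/Mar/2010:13:55:36] "GET /index.html HTTP/1.1" 200 2326',
    '1.2.3.5 - - [10/Mar/2010:13:56:01] "GET /products.html HTTP/1.1" 200 5120',
    '1.2.3.4 - - [10/Mar/2010:13:57:12] "GET /index.html HTTP/1.1" 200 2326',
]

def page_hits(lines):
    hits = Counter()
    for line in lines:
        # The request is the quoted field: "GET /path HTTP/1.1"
        request = line.split('"')[1]
        path = request.split()[1]
        hits[path] += 1
    return hits

hits = page_hits(log_lines)
most_visited, count = hits.most_common(1)[0]
```

A real usage-mining pipeline would go on to group hits into sessions per visitor and mine navigation sequences, but it begins with exactly this kind of log parsing.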

3. Web Content Mining
Web content mining attempts to discover useful information in the content of web pages, including the text and the hyperlinks a document contains, and can be used to generate a structural report on a web page. Several facets are then analyzed and evaluated: whether users are able to find the information they need, whether the structure of the website is too shallow or too deep, whether the elements of a web page are correctly placed, which areas of the site are the most and least visited, and whether any of this has something to do with page design.
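As a concrete sketch, the snippet below uses Python's standard html.parser to pull every hyperlink out of a made-up page, a first step toward the kind of structural report described above:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects the href of every <a> tag encountered in the document."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# An invented page fragment for illustration.
page = ('<html><body><a href="/about.html">About</a> '
        '<a href="/faq.html">FAQ</a></body></html>')
parser = LinkExtractor()
parser.feed(page)
# parser.links now holds the two extracted hrefs.
```

From the extracted links, one can build the page-to-page graph and check, for instance, how deep the site structure is.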


Readings on Data Mining for Big Data

Big Data has been an interesting topic in the data mining community lately. As of today (17/3/10), a broad Google search for "big data" returns about 240,000,000 pages. If you are new to big data, see the Wonder Wheel visualization below to find out which related terms are associated with it.

Further readings on Big Data can be found on these posts:
1. What is Big Data?

Big Data is the “modern scale” at which we are defining our data usage challenges. Big Data begins at the point where we need to seriously start thinking about the technologies used to drive our information needs. While Big Data as a term seems to refer to volume, this isn’t the case. Many existing technologies have little problem physically handling large volumes (TB or PB) of data. Instead, the Big Data challenges result from the combination of volume and our usage demands on that data. And those usage demands are nearly always tied to timeliness.

Big Data is therefore the push to utilize “modern” volumes of data within “modern” timeframes. The exact definitions are, of course, relative and constantly changing; right now we are somewhere along the path towards the end goal: the ability to handle an unlimited volume of data, processing all requests in real time.

2. Big Data Technologies

Some key points on big data technologies are summarized in two clips:

Big Data Technologies (1:35 minutes)
Key Technology Dimensions (4:52 minutes)
3. Data Mining of Big Data

The Data Mining Renaissance – Hadoop, an open-source implementation of MapReduce.
Algorithms for Massive Data Set Analysis – algorithmic and statistical methods for large-scale data analysis (course)
Method for fast large scale data mining using logistic regression
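To illustrate the MapReduce model that Hadoop implements, here is the canonical word-count example written as plain map and reduce phases in Python (no Hadoop required; the documents are invented):

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    # Shuffle + reduce: group pairs by key and sum the counts.
    grouped = defaultdict(int)
    for word, count in pairs:
        grouped[word] += count
    return dict(grouped)

docs = ["big data is big", "data mining of big data"]
counts = reduce_phase(map_phase(docs))
```

Hadoop's contribution is running the map and reduce phases in parallel across a cluster, with the shuffle handled by the framework; the programming model itself is this simple.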
4. Current and Future Trends of Big Data

The Pathologies of Big Data – discusses the problems and how to deal with big data.
The Future Is Big Data in the Cloud – talks about distributed, non-relational database systems (DNRDBMS) for tackling “Big Data stack”.
Big Data Is Less About Size, And More About Freedom – big data trend is about the democratization of large data.
Data Singularity – another way of handling big data!


Which Data Mining Algorithm Is Right For You?

The choice of data mining algorithm is not an easy task. According to the “Data Mining Guide”, if you’re just starting out, it’s probably a good idea to experiment with several techniques to give yourself a feel for how they work. Your choice of algorithm will depend upon:

the data you’ve gathered,
the problem you’re trying to solve,
the computing tools you have available to you.
Let’s take a brief look at four of the more popular algorithms.

1. Regression

Regression is the oldest and most well-known statistical technique that the data mining community utilizes. Basically, regression takes a numerical dataset and develops a mathematical formula that fits the data. When you’re ready to use the results to predict future behavior, you simply take your new data, plug it into the developed formula and you’ve got a prediction! The major limitation of this technique is that it only works well with continuous quantitative data (like weight, speed or age). If you’re working with categorical data where order is not significant (like color, name or gender) you’re better off choosing another technique.
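Here is a minimal sketch of the idea in Python: an ordinary least-squares line fitted to made-up age/weight data, with a prediction made by plugging a new value into the fitted formula:

```python
def fit_line(xs, ys):
    # Ordinary least squares for one predictor: y = slope * x + intercept.
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Made-up data: age (years) vs. weight (kg), chosen to be exactly linear.
ages = [2, 4, 6, 8]
weights = [12, 16, 20, 24]
slope, intercept = fit_line(ages, weights)

# To predict, plug the new data into the developed formula.
predicted = slope * 10 + intercept
```

With categorical inputs like color or gender there is no meaningful mean or slope, which is exactly the limitation noted above.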

2. Classification

Working with categorical data or a mixture of continuous numeric and categorical data? Classification analysis might suit your needs well. This technique is capable of processing a wider variety of data than regression and is growing in popularity. You’ll also find the output much easier to interpret: instead of the complicated mathematical formula given by the regression technique, you’ll receive a decision tree that requires a series of binary decisions. Popular classification algorithms include decision-tree learners such as CART and C4.5 (note that k-means, often mentioned in the same breath, is a clustering algorithm, not a classifier). Take a look at the Classification Trees chapter from the Electronic Statistics Textbook for in-depth coverage of this technique.
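To give a feel for the binary-decision flavor of tree output, here is a toy sketch in Python: a one-level decision “stump” that picks the single categorical feature best separating some invented fruit data (a real tree learner such as CART simply repeats this kind of split recursively):

```python
from collections import Counter, defaultdict

def best_stump(rows, labels):
    """Pick the categorical feature whose values best separate the labels,
    and return a predictor mapping each value to its majority label."""
    n_features = len(rows[0])
    best = None
    for f in range(n_features):
        by_value = defaultdict(list)
        for row, label in zip(rows, labels):
            by_value[row[f]].append(label)
        # Training errors if each value predicts its majority label.
        errors = sum(len(lbls) - Counter(lbls).most_common(1)[0][1]
                     for lbls in by_value.values())
        if best is None or errors < best[0]:
            mapping = {v: Counter(lbls).most_common(1)[0][0]
                       for v, lbls in by_value.items()}
            best = (errors, mapping, f)
    errors, mapping, feature = best
    return lambda row: mapping.get(row[feature])

# Invented data: (color, size) -> fruit label; size separates best here.
rows = [("red", "small"), ("green", "big"), ("red", "big"), ("green", "small")]
labels = ["cherry", "apple", "apple", "grape"]
predict = best_stump(rows, labels)
```

Note how the learned model reads as a plain rule ("if size is big, predict apple"), which is the interpretability advantage mentioned above.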

3. Neural Networks

Neural networks have seen an explosion of interest over the last few years, and are being successfully applied across an extraordinary range of problem domains, in areas as diverse as finance, medicine, engineering, geology and physics. Indeed, anywhere that there are problems of prediction, classification or control, neural networks are being introduced. This sweeping success can be attributed to a few key factors:

Power. Neural networks are very sophisticated modeling techniques capable of modeling extremely complex functions. In particular, neural networks are nonlinear (a term which is discussed in more detail later in this section). For many years linear modeling has been the commonly used technique in most modeling domains since linear models have well-known optimization strategies. Where the linear approximation was not valid (which was frequently the case) the models suffered accordingly. Neural networks also keep in check the curse of dimensionality problem that bedevils attempts to model nonlinear functions with large numbers of variables.
Ease of use. Neural networks learn by example. The neural network user gathers representative data, and then invokes training algorithms to automatically learn the structure of the data. Although the user does need to have some heuristic knowledge of how to select and prepare data, how to select an appropriate neural network, and how to interpret the results, the level of user knowledge needed to successfully apply neural networks is much lower than would be the case using (for example) some more traditional nonlinear statistical methods.
Neural networks are also intuitively appealing, based as they are on a crude low-level model of biological neural systems. In the future, the development of this neurobiological modeling may lead to genuinely intelligent computers.
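As a sketch of the nonlinearity point, the pure-Python snippet below trains a tiny one-hidden-layer network by backpropagation on XOR, a function no linear model can fit; the architecture, seed, and learning rate are arbitrary choices for illustration:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# XOR: the classic target that requires a nonlinear model.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

# One hidden layer with two sigmoid units; weights start small and random.
w_h = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
b_h = [0.0, 0.0]
w_o = [random.uniform(-1, 1) for _ in range(2)]
b_o = 0.0

def forward(x):
    hidden = [sigmoid(sum(w * xi for w, xi in zip(w_h[j], x)) + b_h[j])
              for j in range(2)]
    return hidden, sigmoid(sum(w * h for w, h in zip(w_o, hidden)) + b_o)

def mean_squared_error():
    return sum((forward(x)[1] - y) ** 2 for x, y in data) / len(data)

loss_before = mean_squared_error()
lr = 0.5
for _ in range(5000):
    for x, y in data:
        hidden, out = forward(x)
        # Backpropagate the squared error through the sigmoid activations.
        d_out = (out - y) * out * (1 - out)
        for j in range(2):
            d_hidden = d_out * w_o[j] * hidden[j] * (1 - hidden[j])
            w_o[j] -= lr * d_out * hidden[j]
            for i in range(2):
                w_h[j][i] -= lr * d_hidden * x[i]
            b_h[j] -= lr * d_hidden
        b_o -= lr * d_out

loss_after = mean_squared_error()
```

The "learn by example" claim is visible here: nothing about XOR is coded in, yet the error drops as the weights adapt to the data.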

4. Evolutionary Computation

Evolutionary algorithms employ the design philosophy of natural evolution to find solutions to hard problems. Generally speaking, evolutionary techniques can be viewed either as search methods or as optimization techniques. An evolutionary algorithm (EA) is a stochastic search method based on abstractions of the processes of Darwinian evolution. An EA maintains a population of “individuals”, each of them a candidate solution to a given problem. Each individual is evaluated by a fitness function, which measures the quality of its corresponding candidate solution. Individuals evolve towards better and better individuals via a selection procedure based on natural selection (survival of the fittest) and operators based on genetics (crossover and mutation). In essence, the crossover operator swaps genetic material between individuals, whereas the mutation operator changes the value of a “gene” (a small part of the genetic material of an individual) to a new random value. Genetic Algorithms (GAs) are the most popular paradigm of evolutionary algorithms.
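Here is a minimal genetic algorithm in Python for the classic OneMax toy problem (maximize the number of 1-bits in a bit string), showing the fitness function, selection, single-point crossover, and mutation described above; all parameters are arbitrary illustration values:

```python
import random

random.seed(42)

GENES, POP, GENERATIONS = 20, 30, 60

def fitness(individual):
    # OneMax: quality is simply the number of 1-bits.
    return sum(individual)

def crossover(a, b):
    # Single-point crossover: swap genetic material at a random cut.
    point = random.randrange(1, GENES)
    return a[:point] + b[point:]

def mutate(individual, rate=0.02):
    # Flip each gene to a new value with small probability.
    return [1 - g if random.random() < rate else g for g in individual]

def select(population):
    # Tournament selection: the fitter of two random individuals survives.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
best_ever = max(population, key=fitness)
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP)]
    champion = max(population, key=fitness)
    if fitness(champion) > fitness(best_ever):
        best_ever = champion
```

Replacing the fitness function is all it takes to point the same loop at a different problem, which is much of the appeal of GAs.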

For more information about Data Mining, click here


Top 10 Data Mining Mistakes

Maybe some of you have read this white paper before, but I just want to add it here as a resource collection for future data mining beginners. The paper is a book excerpt from “Handbook of Statistical Analysis and Data Mining Applications”, Elsevier (ISBN: 978-0-123747655). According to the authors, mining data to extract useful and enduring patterns remains a skill that is arguably more art than science. In the paper, they briefly describe, and illustrate with examples, what they believe are the “Top 10” mistakes of data mining, in terms of frequency and seriousness.

Top 10 DM Mistakes (white paper)

0. Lack of Data (important too!)
1. Focus on Training
2. Rely on One Technique
3. Ask the Wrong Question
4. Listen (Only) to the Data
5. Accept Leaks from the Future
6. Discount Pesky Cases
7. Extrapolate
8. Answer Every Inquiry
9. Sample Casually
10. Believe the Best Model

I would like to emphasize mistake no. 2 (relying on one technique only), which I think is important for us to consider. In a data mining task, it is important that we try variations of modeling algorithms to make sure we get the best result. Look for new algorithms and tools available in the market (sometimes it is worth reading new conference or journal publications to catch the latest improvements to the algorithms) to mine your data. There is also the well-known “No Free Lunch” (NFL) theorem, which states that no single algorithm is best across all problems!
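As a sketch of what trying more than one technique can look like in practice, the Python snippet below fits two models to some made-up data, a mean baseline and a simple least-squares line, and keeps whichever has the lower holdout error:

```python
def mse(preds, ys):
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

# Made-up data with a clear linear trend, split into train and holdout sets.
train_x, train_y = [1, 2, 3, 4, 5], [2.1, 4.0, 6.2, 7.9, 10.1]
test_x, test_y = [6, 7], [12.0, 14.1]

# Technique 1: predict the training mean everywhere.
mean_model = sum(train_y) / len(train_y)
baseline_error = mse([mean_model] * len(test_y), test_y)

# Technique 2: a simple least-squares line fitted on the training data.
mx = sum(train_x) / len(train_x)
my = sum(train_y) / len(train_y)
slope = sum((x - mx) * (y - my) for x, y in zip(train_x, train_y)) / \
        sum((x - mx) ** 2 for x in train_x)
intercept = my - slope * mx
linear_error = mse([slope * x + intercept for x in test_x], test_y)

# Keep whichever technique generalizes better to held-out data.
best_technique = "linear" if linear_error < baseline_error else "baseline"
```

The same harness extends to any number of candidate techniques, which is exactly the discipline mistake no. 2 asks for.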
