Monthly Archives: February 2016

What Are The Points To Note About MOLAP?

Multidimensional OLAP (MOLAP) uses array-based multidimensional storage engines to provide multidimensional views of data. With multidimensional data stores, storage utilization may be low if the data set is sparse. Therefore, many MOLAP servers use two levels of data storage representation to handle dense and sparse data sets.

Points to Remember:

MOLAP tools process data with a consistent response time regardless of the level of summarization or the calculations selected.

MOLAP tools avoid many of the complexities of creating a relational database to store data for analysis.

MOLAP tools need the fastest possible performance.

A MOLAP server adopts two levels of storage representation to handle dense and sparse data sets (see the sketch after this list).

Denser sub-cubes are identified and stored as array structures.

Sparse sub-cubes employ compression technology.
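To make the two-level idea concrete, here is a minimal Python sketch, not any vendor's actual storage engine: a dense sub-cube materializes every cell in an array, while a sparse sub-cube keeps only its non-empty cells in a coordinate dictionary, a simple stand-in for compression. The class names and cube dimensions are invented for illustration.

import numpy as np

class DenseSubCube:
    def __init__(self, shape):
        self.cells = np.zeros(shape)          # every cell is materialized up front

    def set(self, coords, value):
        self.cells[coords] = value

    def get(self, coords):
        return self.cells[coords]

class SparseSubCube:
    def __init__(self):
        self.cells = {}                       # only non-empty cells are stored

    def set(self, coords, value):
        self.cells[coords] = value

    def get(self, coords):
        return self.cells.get(coords, 0.0)    # an absent cell reads as zero

# a sales cube indexed by (time, location, product)
dense = DenseSubCube((12, 4, 100))            # a well-populated region of the cube
sparse = SparseSubCube()                      # a mostly-empty region of the cube
dense.set((0, 1, 42), 1500.0)
sparse.set((11, 3, 7), 80.0)
print(dense.get((0, 1, 42)), sparse.get((5, 2, 9)))   # 1500.0 0.0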

MOLAP Architecture

MOLAP includes the following components:

Database server.

MOLAP server.

Front-end tool.

Advantages

MOLAP allows the fastest indexing to pre-computed summarized data.

It helps users connected to a network who need to analyze larger, less-defined data.

It is easier to use, and therefore MOLAP is suitable for inexperienced users.

Disadvantages

MOLAP is not capable of containing detailed data.

Storage utilization may be low if the data set is sparse.

MOLAP (multidimensional online analytical processing) is online analytical processing (OLAP) that indexes directly into a multidimensional database. In general, an OLAP application treats data multidimensionally; the user is able to view different aspects or facets of data aggregates, such as sales by time, location, and product model. If the data is stored in a relational database, it can be viewed multidimensionally, but only by successively accessing and processing a table for each dimension or facet of a data aggregate. MOLAP processes data that is already stored in a multidimensional array in which all possible combinations of data are reflected, each in a cell that can be accessed directly. For this reason, MOLAP is, for most uses, faster and more user-responsive than relational online analytical processing (ROLAP), the main alternative to MOLAP. There is also hybrid OLAP (HOLAP), which combines some features of both ROLAP and MOLAP. MOLAP differs significantly in that (in some software) it requires the pre-computation and storage of information in the cube, the operation known as processing. Most MOLAP solutions store these data in an optimized multidimensional array storage, rather than in a relational database (as in ROLAP).
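As a toy illustration of that direct cell access: with fixed dimension sizes, the coordinates of a cell map straight to a single flat array offset, so a lookup needs no per-dimension table scan. The dimension sizes below are invented.

dims = (12, 4, 100)                # (time, location, product)

def offset(coords, dims):
    # row-major linearization: ((t * 4) + l) * 100 + p for these sizes
    flat = 0
    for c, size in zip(coords, dims):
        flat = flat * size + c
    return flat

storage = [0.0] * (12 * 4 * 100)   # one pre-computed aggregate per cell
storage[offset((3, 2, 57), dims)] = 9120.0
print(storage[offset((3, 2, 57), dims)])   # 9120.0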

There are many approaches and methods for efficient data storage, aggregation, and the implementation of specific business logic with a MOLAP solution. As a result, there are many misconceptions about what the term specifically means. Our Oracle DBA jobs page is always there for you to make your career in this field.

5 Points That Make ROLAP Special

Relational OLAP (ROLAP) servers are placed between the relational back-end server and client front-end tools. To store and manage warehouse data, relational OLAP uses a relational or extended-relational DBMS.

ROLAP

ROLAP includes the following:

Implementation of aggregation navigation logic

Optimization for each DBMS back-end

Additional tools and services

Points to Remember

ROLAP servers are highly scalable.

ROLAP tools analyze large volumes of data across multiple dimensions.

ROLAP tools store and analyze highly volatile and changeable data.

Relational OLAP Architecture

ROLAP contains the following components:

Database server.

ROLAP server.

Front-end tool.

Advantages

ROLAP servers can be easily used with existing RDBMSs.

Data can be stored efficiently, since no zero facts need to be stored.

ROLAP tools do not use pre-calculated data cubes.

The DSS server of MicroStrategy adopts the ROLAP approach.

Disadvantages

Poor query performance.

Some limitations of scalability, depending on the technology architecture that is used.

ROLAP (relational online analytical processing) is an alternative to MOLAP (multidimensional OLAP) technology. While both ROLAP and MOLAP analytic tools are designed to allow analysis of data through a multidimensional data model, ROLAP differs significantly in that it does not require the pre-computation and storage of information. Instead, ROLAP tools access the data in a relational database and generate SQL queries to calculate information at the appropriate level when an end user requests it. With ROLAP, it is possible to create additional database tables (summary tables or aggregations) that summarize the data at any desired combination of dimensions.

While ROLAP uses a relational database as its source, generally the database must be carefully designed for ROLAP use. A database that was designed for OLTP will not function well as a ROLAP database. Therefore, ROLAP still involves creating an additional copy of the data. However, since it is a database, a variety of technologies can be used to populate it.
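A hedged sketch of that query-generation step in Python; the star-schema names (sales_fact, region, quarter, revenue) are hypothetical, not taken from any particular tool.

def rolap_sql(measures, dimensions, fact="sales_fact"):
    # build one aggregate query at request time; nothing is pre-computed
    select = ", ".join(dimensions + [f"SUM({m}) AS {m}" for m in measures])
    return f"SELECT {select} FROM {fact} GROUP BY {', '.join(dimensions)}"

print(rolap_sql(["revenue"], ["region", "quarter"]))
# SELECT region, quarter, SUM(revenue) AS revenue FROM sales_fact GROUP BY region, quarter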

Benefits of ROLAP

ROLAP is considered more scalable in handling large data volumes, especially models with dimensions of very high cardinality (i.e., millions of members).

With a wide range of data-loading tools available, and the ability to fine-tune the ETL code to the particular data model, load times are generally much shorter than with automated MOLAP loads.

The data are stored in a standard relational database and can be accessed by any SQL reporting tool (the tool does not have to be an OLAP tool).

ROLAP tools are better at handling non-aggregatable facts (e.g., textual descriptions). MOLAP tools tend to suffer slow performance when querying these elements. Our Oracle DBA course is more than enough for you to make your career in this field.

What Are Data Warehouse System Managers?

System management is mandatory for the successful implementation of a data warehouse. The most important system managers are:

System configuration manager

System scheduling manager

System event manager

System database manager

System backup recovery manager

System Configuration Manager

The system configuration manager is responsible for the management of the setup and configuration of the data warehouse.

The structure of the configuration manager varies from one operating system to another.

In Unix, the structure of the configuration manager varies from vendor to vendor.

Configuration managers have a single user interface.

The interface of the configuration manager allows us to control all aspects of the system.

Note: The most important configuration tool is the I/O manager.

System Scheduling Manager

The system scheduling manager is responsible for the successful implementation of the data warehouse. Its purpose is to schedule ad hoc queries. Every operating system has its own scheduler with some form of batch control mechanism. The features a system scheduling manager must provide are as follows:

Work across cluster or MPP boundaries

Deal with international time differences

Handle job failures

Handle multiple queries

Support job priorities

Restart or re-queue failed jobs

Notify the user or a process when a job is completed

Maintain the job schedules across system outages

Re-queue jobs to other queues

Support the stopping and starting of queues

Log queued jobs

Deal with inter-queue processing

Note: The above list can be used as evaluation parameters for assessing a good scheduler. A toy sketch of two of these features (re-queuing failed jobs and completion notification) follows.
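As an illustration only (a real warehouse scheduler is far richer), the following Python sketch shows those two features: failed jobs go back on the queue up to a retry limit, and a notification fires on completion.

from collections import deque

def run_queue(jobs, max_retries=2, notify=print):
    queue = deque((job, 0) for job in jobs)         # (callable, attempts so far)
    while queue:
        job, attempts = queue.popleft()
        try:
            job()
            notify(f"{job.__name__} completed")
        except Exception as exc:
            if attempts < max_retries:
                queue.append((job, attempts + 1))   # re-queue the failed job
            else:
                notify(f"{job.__name__} failed permanently: {exc}")

def data_load():
    pass                                            # stand-in for a real load step

run_queue([data_load])                              # prints: data_load completed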

Some essential tasks that a scheduler must be capable of handling are as follows:

Daily and ad hoc query scheduling

Execution of regular report requirements

Data load

Data processing

Index creation

Backup

Aggregation creation

Data transformation

Note: If the data warehouse is running on a cluster or MPP architecture, then the system scheduling manager must be capable of running across that architecture.

System Event Manager

The event manager is a piece of software that manages the events defined on the data warehouse system. We cannot manage the data warehouse manually, because the structure of a data warehouse is very complex; we therefore need a tool that handles all the events automatically, without any user intervention.

Note: The event manager monitors event occurrences and deals with them. The event manager also tracks the myriad of things that can go wrong in this complex data warehouse system.

Events

Events are actions generated by the user or by the system itself. It may be noted that an event is a measurable, observable occurrence of a defined action.

Given below is a list of common events that need to be monitored (a small threshold-checking sketch follows the list).

Hardware failure

Running out of space on certain key disks

A process dying

A process returning an error

CPU usage exceeding an 80% threshold

Internal contention on database serialization points

Buffer cache hit ratios exceeding or falling below thresholds

A table reaching the maximum of its size

Excessive memory swapping

A table failing to extend due to lack of space

Disks exhibiting I/O bottlenecks

Usage of temporary or sort areas reaching certain thresholds

Any other database shared memory usage
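A minimal sketch of the threshold-checking loop such an event manager automates; the metric names and limits below are illustrative assumptions, not values from any specific product.

THRESHOLDS = {
    "cpu_percent":       ("above", 80.0),   # CPU usage exceeding 80%
    "buffer_cache_hit":  ("below", 90.0),   # cache hit ratio falling below 90%
    "temp_area_percent": ("above", 75.0),   # sort/temporary area nearing capacity
}

def check_events(sample):
    # compare each sampled metric to its threshold and raise events
    events = []
    for metric, (direction, limit) in THRESHOLDS.items():
        value = sample.get(metric)
        if value is None:
            continue
        if (direction == "above" and value > limit) or \
           (direction == "below" and value < limit):
            events.append(f"{metric}={value} crossed the '{direction}' limit {limit}")
    return events

print(check_events({"cpu_percent": 93.5, "buffer_cache_hit": 95.0}))
# ["cpu_percent=93.5 crossed the 'above' limit 80.0"]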

Our Oracle DBA training is always there to provide you with a wonderful career in this field.

4 Data Warehousing Delivery Processes

A data warehouse is never static; it evolves as the business grows. As the business evolves, its requirements keep changing, and therefore a data warehouse must be designed to ride with these changes. Hence a data warehouse system needs to be flexible.

Ideally there should be a delivery process for delivering a data warehouse. However, data warehouse projects normally suffer from various issues that make it difficult to complete tasks and deliverables in the strict, ordered fashion demanded by the waterfall method. Most of the time, the requirements are not understood completely. The architectures, designs, and build components can be completed only after gathering and studying all the requirements.

Delivery Method

The delivery method is a variant of the joint application development approach, adapted for the delivery of a data warehouse. We have staged the data warehouse delivery process to minimize risks. The approach discussed here does not reduce the overall delivery time-scales, but it ensures that business benefits are delivered incrementally through the development process.

Note: The delivery process is broken into phases to reduce the project and delivery risk.

The stages in the delivery process are explained below:

IT Strategy

Data warehouses are strategic investments that require a business process to generate benefits. An IT strategy is required to procure and retain funding for the project.

Business Case

The objective of the business case is to estimate the business benefits that should be derived from using a data warehouse. These benefits may not be quantifiable, but the projected benefits need to be clearly stated. If a data warehouse does not have a clear business case, then the business tends to suffer from credibility problems at some stage during the delivery process. Therefore, in data warehouse projects, we need to understand the business case for investment.

Education and Prototyping

Organizations experiment with the concept of data analysis and educate themselves on the value of having a data warehouse before settling on a solution. This is addressed by prototyping, which helps in understanding the feasibility and benefits of a data warehouse. Prototyping on a small scale can promote the educational process as long as:

The prototype addresses a defined technical objective.

The prototype can be thrown away after the feasibility concept has been shown.

The activity addresses a small subset of the eventual data content of the data warehouse.

The activity timescale is non-critical.

The following points are to be kept in mind to produce an early release and deliver business benefits.

Identify an architecture that is capable of evolving.

Focus on the business requirements and technical blueprint phases.

Limit the scope of the first build phase to the minimum that delivers business benefits.

Understand the short-term and medium-term requirements of the data warehouse. Our Oracle training is more than enough for you to make your career in this field.

9 Important Topics To Note In Data Mining

Data mining is defined as extracting information from a huge set of data. In other words, we can say that data mining is mining knowledge from data. This information can be used in any of the following applications −

Market Analysis

Fraud Detection

Customer Retention

Production Control

Science Exploration

Data Mining Engine

The data mining engine is essential to the data mining system. It consists of a set of functional modules that perform the following functions −

Characterization

Association and Correlation Analysis

Classification

Prediction

Cluster analysis

Outlier analysis

Evolution analysis

Knowledge Base

This is the domain knowledge. It is used to guide the search or to evaluate the interestingness of the resulting patterns.

Knowledge Discovery

Some people treat data mining the same as knowledge discovery, while others view data mining as just an essential step in the knowledge discovery process. Here is the list of steps involved in the knowledge discovery process (a pipeline sketch follows the list) −

Data Cleaning

Data Integration

Data Selection

Data Transformation

Data Mining

Pattern Evaluation

Knowledge Presentation
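The ordering of these steps can be sketched as a simple pipeline; every stage below is a stub standing in for the real work, so only the sequencing is meaningful.

def clean(data):      return [r for r in data if r is not None]    # drop noisy records
def integrate(data):  return data        # merge sources into one coherent store
def select(data):     return data        # keep only task-relevant records
def transform(data):  return data        # recode into a mining-ready form
def mine(data):       return {"patterns": len(data)}               # stand-in for mining
def evaluate(found):  return found       # keep only the interesting patterns

result = [1, None, 2, 3]
for step in (clean, integrate, select, transform, mine, evaluate):
    result = step(result)
print("presented:", result)              # presented: {'patterns': 3}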

User Interface

The user interface is the module of the data mining system that enables communication between users and the data mining system. The user interface allows the following functionalities −

Interact with the system by specifying a data mining query task.

Provide information to help focus the search.

Perform mining based on intermediate data mining results.

Browse database and data warehouse schemas or data structures.

Evaluate mined patterns.

Visualize the patterns in different forms.

Data Integration

Data integration is a data preprocessing technique that merges data from multiple heterogeneous sources into a coherent data store. Data integration may involve inconsistent data and therefore needs data cleaning.

Data Cleaning

Data cleaning is a technique applied to remove noisy data and correct inconsistencies in data. Data cleaning involves transformations to correct wrong data. It is performed as a data preprocessing step while preparing the data for a data warehouse.

Data Selection

Data selection is the process where data relevant to the analysis task are retrieved from the database. Sometimes data transformation and consolidation are performed before the data selection process.
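A toy cleaning-and-selection pass over dictionary records; the field names are invented for illustration.

records = [
    {"age": 34, "income": 52000},
    {"age": -1, "income": 48000},     # noisy value: invalid age
    {"age": 27, "income": None},      # missing income
    {"age": 45, "income": 91000},
]

# cleaning: drop rows with invalid or missing values
cleaned = [r for r in records if r["age"] > 0 and r["income"] is not None]

# selection: keep only the attributes relevant to the analysis task
selected = [{"income": r["income"]} for r in cleaned]
print(selected)                       # [{'income': 52000}, {'income': 91000}]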

Clusters

A cluster refers to a group of similar kinds of objects. Cluster analysis refers to forming groups of objects that are very similar to each other but highly different from the objects in other clusters. You can join our Oracle DBA jobs program to make your career in this field.

Data Mining And The World Wide Web

The World Wide Web contains huge amounts of information, providing a rich source for data mining.

Challenges in Web Mining

The web poses great challenges for resource and knowledge discovery, based on the following observations −

The web is too huge − The size of the web is very large and rapidly increasing. This suggests that the web is too huge for data warehousing and data mining.

Complexity of web pages − Web pages do not have a unifying structure. They are very complex compared to traditional text documents. There are huge numbers of documents in the digital library of the web, and these libraries are not arranged in any particular sorted order.

The web is a dynamic information source − The information on the web is rapidly updated. Data such as news, stock markets, weather, sports, and shopping are constantly updated.

Diversity of user communities − The user community on the web is rapidly expanding. These users have different backgrounds, interests, and usage purposes. There are more than 100 million workstations connected to the Internet, and the number is still rapidly increasing.

Relevancy of information − A particular person is generally interested in only a small portion of the web, while the rest of the web contains information that is not relevant to the user and may swamp the desired results.

Mining Web Page Layout Structure

The basic structure of a web page is based on the Document Object Model (DOM). The DOM structure refers to a tree-like structure in which each HTML tag in the page corresponds to a node in the DOM tree. We can segment a web page by using the predefined tags in HTML. Because the HTML syntax is flexible, many web pages do not follow the W3C specifications, and not following the W3C specifications may cause errors in the DOM tree structure.

The DOM structure was initially introduced for presentation in the browser, not for describing the semantic structure of a web page. The DOM structure cannot correctly identify the semantic relationships between the different parts of a web page.

Vision-based page segmentation (VIPS)

The purpose of VIPS is to extract the semantic structure of a web page based on its visual presentation.

Such a semantic structure corresponds to a tree structure. In this tree, each node corresponds to a block.

A value is assigned to each node. This value is called the Degree of Coherence, and it is assigned to indicate how coherent the content within the block is, based on visual perception.

The VIPS algorithm first extracts all the suitable blocks from the HTML DOM tree. After that, it finds the separators between these blocks.

The separators refer to the horizontal or vertical lines in a web page that visually cross no blocks.
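A minimal sketch of the resulting block tree, assuming invented Degree of Coherence (DoC) values: a block that is coherent enough becomes a leaf segment, and anything else is split at its children.

class Block:
    def __init__(self, label, doc, children=None):
        self.label = label
        self.doc = doc                      # Degree of Coherence, 0..1
        self.children = children or []

    def segment(self, threshold):
        # coherent enough (or indivisible) blocks are leaf segments
        if self.doc >= threshold or not self.children:
            return [self.label]
        segments = []
        for child in self.children:         # otherwise split into sub-blocks
            segments += child.segment(threshold)
        return segments

page = Block("page", 0.3, [
    Block("header", 0.9),
    Block("body", 0.5, [Block("article", 0.8), Block("sidebar", 0.7)]),
])
print(page.segment(0.6))                    # ['header', 'article', 'sidebar']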

The semantics of the web page are constructed on the basis of these blocks. You can join our Oracle course to make your career in this field.

What Is Data Mining Query Language?

The Data Mining Query Language (DMQL) was proposed by Han, Fu, Wang, et al. for the DBMiner data mining system. The Data Mining Query Language is based on the Structured Query Language (SQL).

Data mining query languages can be designed to support ad hoc and interactive data mining. DMQL provides commands for specifying primitives. It can work with databases and data warehouses as well, and it can be used to define data mining tasks. In particular, we examine how to define data warehouses and data marts in DMQL.

Syntax for Task-Relevant Data Specification

Here is the syntax of DMQL for specifying task-relevant data −

use database database_name

or

use data warehouse data_warehouse_name

in relevance to att_or_dim_list

from relation(s)/cube(s) [where condition]

order by order_list

group by grouping_list

Syntax for Specifying the Kind of Knowledge

Here we will discuss the syntax for Characterization, Discrimination, Association, Classification, and Prediction.

Characterization

The syntax for characterization is −

mine characteristics [as pattern_name]

analyze {measure(s)}

The analyze clause specifies aggregate measures, such as count, sum, or count%. For example, a description of customer purchasing habits −

mine characteristics as customerPurchasing

analyze count%

Discrimination

The syntax for discrimination is −

mine comparison [as pattern_name]

for {target_class} where {target_condition}

{versus {contrast_class_i}

where {contrast_condition_i}}

analyze {measure(s)}

For example, a user may define big spenders as customers who purchase items that cost $100 or more on average, and budget spenders as customers who purchase items that cost less than $100 on average. The mining of discriminant descriptions for customers from each of these categories can be specified in DMQL as −

mine evaluation as purchaseGroups

for bigSpenders where avg(I.price) ≥ $100

versus budgetSpenders where avg(I.price) < $100

analyze count

Association

The syntax for association is −

mine associations [as pattern_name]

{matching {metapattern}}

For example −

mine associations as buyingHabits

matching P(X: customer, W) ∧ Q(X, Y) ⇒ buys(X, Z)

where X is a key of the customer relation; P and Q are predicate variables; and W, Y, and Z are object variables.

Classification

The syntax for classification is −

mine classification [as pattern_name]

analyze classifying_attribute_or_dimension

For example, to mine patterns classifying customer credit rating, where the classes are determined by the attribute credit_rating, the task can be specified as classifyCustomerCreditRating. Our DBA training course is always there for you to make your career in this field.

How To Mine Text Data From Database?

Text databases consist of huge collections of documents, gathered from several sources such as news articles, books, digital libraries, e-mail messages, web pages, etc. Due to the increase in the amount of information, text databases are growing rapidly. In many text databases, the data is semi-structured.

For example, a document may contain a few structured fields, such as title, author, publishing_date, etc. But along with the structured data, the document also contains unstructured text components, such as the abstract and the contents. Without knowing what is in the documents, it is difficult to formulate effective queries for analyzing and extracting useful information from the data. Users require tools to compare documents and rank their importance and relevance. Therefore, text mining has become a popular and essential theme in data mining.

Information Retrieval

Information retrieval deals with the retrieval of information from a large number of text-based documents. Some database techniques are not usually present in information retrieval systems because the two handle different kinds of data. Examples of information retrieval systems include −

Online library catalogue systems

Online document management systems

Web search systems, etc.

Note − The main problem in an information retrieval system is to locate relevant documents in a document collection based on a user's query. Such a query consists of some keywords describing an information need.

In such search problems, the user takes the initiative to pull relevant information out of a collection. This is appropriate when the user has an ad hoc, short-term information need. But if the user has a long-term information need, the retrieval system can also take the initiative to push any newly arrived information item to the user.

This kind of access to information is called information filtering, and the corresponding systems are known as filtering systems or recommender systems.

Basic Measures for Text Retrieval

We need to check the accuracy of a system when it retrieves a number of documents on the basis of a user's input. Let the set of documents relevant to a query be denoted as {Relevant} and the set of retrieved documents as {Retrieved}. The set of documents that are both relevant and retrieved can be denoted as {Relevant} ∩ {Retrieved}. This can be shown in the form of a Venn diagram.
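From those two sets, the standard precision and recall measures follow directly. A small Python sketch, with invented document IDs:

relevant  = {"d1", "d2", "d3", "d5"}      # {Relevant}
retrieved = {"d2", "d3", "d4"}            # {Retrieved}
hit = relevant & retrieved                # {Relevant} ∩ {Retrieved}

precision = len(hit) / len(retrieved)     # share of retrieved documents that are relevant
recall    = len(hit) / len(relevant)      # share of relevant documents that were retrieved
print(precision, recall)                  # 0.666... 0.5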

You can join our DBA Course to learn more about the latest concepts in this field.

What Is Data Mining Cluster Analysis?

A cluster is a group of objects that belong to the same class. In other words, similar objects are grouped in one cluster and dissimilar objects are grouped in another.

What is Clustering?

Clustering is the process of grouping abstract objects into classes of similar objects.

Points to Remember

A cluster of data objects can be treated as one group.

While doing cluster analysis, we first partition the set of data into groups based on data similarity and then assign labels to the groups.

The main advantage of clustering over classification is that it is adaptable to changes and helps single out useful features that distinguish different groups.

Applications of Cluster Analysis

Cluster analysis is broadly used in many applications such as market research, pattern recognition, data analysis, and image processing.

Clustering can also help marketers discover distinct groups in their customer base, and they can characterize those customer groups based on purchasing patterns.

In the field of biology, it can be used to derive plant and animal taxonomies, categorize genes with similar functionality, and gain insight into structures inherent to populations.

Clustering also helps in the identification of areas of similar land use in an earth observation database. It likewise helps in identifying groups of houses in a city according to house type, value, and geographic location.

Clustering also helps in classifying documents on the web for information discovery.

Clustering is also used in outlier detection applications, such as the detection of credit card fraud.

As a data mining function, cluster analysis serves as a tool to gain insight into the distribution of data and to observe the characteristics of each cluster.

Requirements of Clustering in Data Mining

The following points throw light on why clustering is required in data mining −

Scalability − We need highly scalable clustering algorithms to deal with large databases.

Ability to deal with different kinds of attributes − Algorithms should be applicable to any kind of data, such as interval-based (numerical), categorical, and binary data.

Discovery of clusters with arbitrary shape − The clustering algorithm should be capable of detecting clusters of arbitrary shape. It should not be bounded to distance measures that tend to find only spherical clusters of small size.

High dimensionality − The clustering algorithm should be able to handle not only low-dimensional data but also high-dimensional spaces.

Ability to deal with noisy data − Databases contain noisy, missing, or erroneous data. Some algorithms are sensitive to such data and may produce poor-quality clusters.

Interpretability − The clustering results should be interpretable, comprehensible, and usable.

Clustering Methods

Clustering methods can be classified into the following categories −

Partitioning Method

Hierarchical Method

Density-based Method

Grid-Based Method

Model-Based Method

Constraint-based Method

Partitioning Method

Suppose we are given a database of 'n' objects, and the partitioning method constructs 'k' partitions of the data. Each partition represents a cluster, and k ≤ n. This means the method classifies the data into k groups, which satisfy the following requirements (a minimal k-means sketch follows the list) −

Each group contains at least one object.

Each object must belong to exactly one group.
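As one concrete partitioning method, here is a minimal k-means sketch on one-dimensional points. The text above does not prescribe a specific algorithm; k-means is used purely as an illustration, and the naive initialization is a simplification.

def kmeans(points, k, iters=20):
    centers = points[:k]                          # naive initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centers[i]))
            groups[nearest].append(p)             # each object joins exactly one group
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return groups

print(kmeans([1.0, 1.2, 0.8, 9.0, 9.5, 10.1], k=2))
# [[1.0, 1.2, 0.8], [9.0, 9.5, 10.1]]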

Our DBA training institute is always there for you to make your career in this field.

9 Classifications Based On Databases Mined?

We can classify a data mining system according to the kind of databases mined. A database system can be classified according to different criteria, such as data models or types of data, and the data mining system can be classified accordingly.

For example, if we classify a database according to the data model, then we may have a relational, transactional, object-relational, or data warehouse mining system.

Classification Based on the Kind of Knowledge Mined

We can classify a data mining system according to the kind of knowledge mined, which means the data mining system is classified on the basis of functionalities such as −

Characterization

Discrimination

Association and Correlation Analysis

Classification

Prediction

Outlier Analysis

Evolution Analysis

Classification Based on the Techniques Utilized

We can classify a data mining system according to the kind of techniques used. These techniques can be described according to the degree of user interaction involved or the methods of analysis employed.

Classification Based on the Applications Adapted

We can classify a data mining system according to the applications adapted. These applications are as follows −

Finance

Telecommunications

DNA

Stock Markets

E-mail

Integrating a Data Mining System with a DB/DW System

If a data mining system is not integrated with a database or a data warehouse system, then there is no system to communicate with. This arrangement is known as the non-coupling scheme. In this scheme, the main focus is on data mining design and on developing efficient and effective algorithms for mining the available data sets.

The list of integration schemes is as follows −

No coupling − In this scheme, the data mining system does not utilize any database or data warehouse functions. It fetches the data from a particular source and processes that data using some data mining algorithms. The data mining result is stored in another file.

Loose coupling − In this scheme, the data mining system may use some of the functions of the database and data warehouse system. It fetches the data from the data repository managed by these systems, performs data mining on that data, and then stores the mining result either in a file or in a designated place in a database or a data warehouse.

Semi-tight coupling − In this scheme, the data mining system is linked with a database or a data warehouse system and, in addition, efficient implementations of a few data mining primitives can be provided in the database.

Tight coupling − In this scheme, the data mining system is smoothly integrated into the database or data warehouse system. The data mining subsystem is treated as one functional component of an information system. Our DBA development course is always there for you to make your career in this field.

Don't be shellfish...Digg thisBuffer this pageEmail this to someoneShare on FacebookShare on Google+Pin on PinterestShare on StumbleUponShare on LinkedInTweet about this on TwitterPrint this pageShare on RedditShare on Tumblr