IBM SPSS Decision Trees

Easily identify groups and predict outcomes


The IBM® SPSS® Decision Trees module (formerly called PASW® Decision Trees) helps you better identify groups, discover relationships between them and predict future events.

This module features highly visual classification and decision trees. These trees enable you to present categorical results in an intuitive manner, so you can more clearly explain categorical analysis to non-technical audiences.

IBM SPSS Decision Trees enables you to explore results and visually determine how your model flows. This helps you find specific subgroups and relationships that you might not uncover using more traditional statistics. The module includes four established tree-growing algorithms.

Use IBM SPSS Decision Trees if you need to identify groups and sub-groups. Applications include:

  • Database marketing
  • Market research
  • Credit risk scoring
  • Program targeting
  • Marketing in the public sector

Choose from four established tree-growing algorithms and discover hidden relationships in your data.

IBM SPSS Decision Trees provides specialized tree-building techniques for classification – entirely within the IBM SPSS Statistics environment. It includes four established tree-growing algorithms:

  • CHAID – A fast, statistical, multi-way tree algorithm that explores data quickly and efficiently, and builds segments and profiles with respect to the desired outcome
  • Exhaustive CHAID – A modification of CHAID, which examines all possible splits for each predictor
  • Classification and regression trees (C&RT) – A complete binary tree algorithm, which partitions data and produces accurate homogeneous subsets
  • QUEST – A statistical algorithm that selects variables without bias and builds accurate binary trees quickly and efficiently
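To make the idea behind these algorithms concrete, here is a minimal, illustrative sketch (not IBM's implementation) of the core step in a C&RT-style binary tree: choosing the split on one numeric predictor that minimizes weighted Gini impurity. The data and variable names are hypothetical.

```python
# Illustrative sketch of a single C&RT-style binary split chosen by
# Gini impurity. Not IBM's implementation; a real tree repeats this
# search over all predictors and recurses on each resulting subset.

from collections import Counter

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_binary_split(values, labels):
    """Find the threshold on one numeric predictor that minimizes the
    weighted Gini impurity of the two resulting subsets."""
    n = len(values)
    best = (None, gini(labels))  # (threshold, weighted impurity)
    for threshold in sorted(set(values)):
        left = [lab for v, lab in zip(values, labels) if v <= threshold]
        right = [lab for v, lab in zip(values, labels) if v > threshold]
        if not left or not right:
            continue  # a split must produce two non-empty subsets
        weighted = (len(left) * gini(left) + len(right) * gini(right)) / n
        if weighted < best[1]:
            best = (threshold, weighted)
    return best

# Hypothetical data: customer ages and whether they responded to an offer.
ages = [22, 25, 30, 35, 40, 45, 50, 55]
responded = ["no", "no", "no", "yes", "yes", "yes", "yes", "no"]
threshold, impurity = best_binary_split(ages, responded)
print(threshold, round(impurity, 3))  # prints: 30 0.2
```

CHAID differs from this sketch mainly in that it uses chi-squared tests and allows multi-way rather than strictly binary splits.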

With four algorithms to choose from, you can try different approaches and find the one that best fits your data.

Because you create classification trees directly within IBM SPSS Statistics, you can conveniently use the results to segment and group cases directly within the data. Additionally, you can generate selection or classification/prediction rules in the form of IBM SPSS Statistics syntax, SQL statements or simple text (through syntax).

You can display these rules in the Viewer and save them to an external file for later use to make predictions about individual and new cases. If you'd like to use your results to score other data files, you can write information from the tree model directly to your data or create XML models for use in IBM SPSS Statistics Server.
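As a rough illustration of how exported rules can score new cases, the sketch below represents a few terminal-node rules as predicates and applies them to an unseen record. The rule format, field names, and thresholds are hypothetical, not actual SPSS Statistics syntax or SQL output.

```python
# Illustrative sketch (hypothetical rule format, not actual SPSS syntax):
# applying classification rules exported from a tree model to new cases.

# Each entry is (predicate, predicted class); each predicate corresponds
# to one terminal node of the tree, read top to bottom.
rules = [
    (lambda case: case["age"] <= 30, "no"),
    (lambda case: case["age"] > 30 and case["income"] > 40000, "yes"),
]

def score(case, rules, default="no"):
    """Return the predicted class from the first rule the case satisfies."""
    for predicate, prediction in rules:
        if predicate(case):
            return prediction
    return default  # fall back when no terminal-node rule matches

new_case = {"age": 42, "income": 52000}
print(score(new_case, rules))  # prints: yes
```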

IBM SPSS Decision Trees diagrams, tables and graphs are easy to interpret. Use the highly visual trees to discover relationships hidden in your data, and use tree model results to score cases directly in IBM SPSS Statistics.