Information Design Theory


Facilitating attention, facilitating perception, facilitating processing, and facilitating memory are important cognitive principles in information design.
[7TheoriesID, p.824]

Active voice, attention, clarity, comprehensibility, consistency, emphasis, information ethics, legibility, memory, perception, precision, processing, quality, readability, reading value, simplicity, structure, and unity are all key concepts in information design.
[IDTheories, p.10]

Providing structure and providing simplicity are two of the functional principles in information design.
[IDTheories, p.15]

A message is information content conveyed from a sender to a receiver in a single context on one occasion.
[IDTheories, p.14]

The first-century architect, author, and engineer Marcus Vitruvius Pollio presented three principles for good architecture in his book De architectura. A structure must exhibit the three qualities of firmitas, utilitas, and venustas – that is, it must be solid, useful, and beautiful. These are sometimes termed the Vitruvian virtues or the Vitruvian Triad. Lankow (2012) transformed these principles for the design of information graphics. The first principle is soundness. It refers to whether the information presented is complete, correct, and valuable to the viewer. The second principle is utility. It refers to whether the design meets the designer’s objectives. The third principle is beauty. It refers to whether the design is appealing and appropriate.
[IDTheories, p.17]

Tufte (1990) argued that principles of information design are universal. Like mathematics, ID principles are not tied to the unique features of a particular language, nor to a particular culture.
[IDGuidelines, p.168]

Defining the specific categories is crucial, as they will communicate the designer’s prejudices and understandings more easily than any other organization.
[InfoInteractionDesign, p.6]


Acquisition of knowledge involves complex cognitive processes, such as attention, perception, and learning. These processes are influenced by our earlier experiences and our memories. Groups of brain cells are activated and associate with each other. Information is converted into experience and insight.
[IDTheories, p.36]

Rouet et al. (1995) noted that although a large number of studies have been devoted to the cognitive processing of single text passages, far less is known about the comprehension process in using multiple documents for learning.
[IDTheories, p.44]

Learning styles are affective, cognitive, and physiological traits that serve as relatively stable indicators of how learners perceive, interact with, and respond to the learning environment.
[IDTheories, p.45]

Gardner (1983) found that we each have at least seven different types of intelligence. Two of these types, 1) linguistic intelligence and 2) logical-mathematical intelligence, are very highly valued in traditional education. The other five intelligences are called 3) interpersonal intelligence, or social intelligence, 4) intrapersonal intelligence, or introspective intelligence, 5) kinaesthetic intelligence, or physical intelligence, 6) musical intelligence, and 7) spatial, or visual intelligence.

People with visual intelligence create mental images, use metaphors, and have a sense of gestalt. They like to engage in drawing, painting, and sculpting. These people can easily read maps, charts, and diagrams. This is the kind of ability used by architects, chess players, naturalists, navigators, painters, pilots, and sculptors.

There are, however, many opinions with respect to learning styles (Coffield et al., 2004). Some argue that there is a lack of evidence to support the view that matching teaching and learning styles is educationally significant (Geake, 2008).
[IDTheories, p.46]

Information is moved from sensory memory to short-term memory. Selected bits of information are stored in a “text base,” and in an “image base” respectively. Then learners build connections between verbal and visual representations. This is best done when the text and the illustrations are actively held in memory at the same time. This can happen when text and illustrations are presented in close connection on the same page, or when learners have sufficient experience to generate their own mental images as they read the text.
[IDTheories, p.49]

Based on dual coding theory, cognitive load theory, and constructivist learning theory, Mayer (1997) proposed a cognitive theory of multimedia learning and argued that active learning occurs when a learner engages in three cognitive processes: 1) selecting relevant words for verbal processing, 2) selecting relevant images for visual processing, and 3) organizing words into a coherent verbal model and images into a coherent visual model, then integrating corresponding components of the two models. Moreno and Mayer (2000) presented six instructional design principles for this theory.

  • Split-attention principle: “Students learn better when the instructional material does not require them to split their attention between multiple sources of mutually referring information.”
  • Modality principle: “Students learn better when the verbal information is presented auditorily as speech than visually as on-screen text both for concurrent and sequential presentations.”
  • Redundancy principle: “Students learn better from animation and narration than from animation, narration, and text if the visual information is presented simultaneously to the verbal information.”
  • Spatial contiguity principle: “Students learn better when on-screen text and visual materials are physically integrated rather than separated.”
  • Temporal contiguity principle: “Students learn better when verbal and visual materials are temporally synchronized rather than separated in time.”
  • Coherence principle: “Students learn better when extraneous material is excluded rather than included in multimedia explanations.”

Moreno and Mayer (2000) concluded that presenting a verbal explanation of how a system works alongside an animation does not ensure that students will understand the explanation unless research-based principles are applied to the design. Multimedia presentations should not contain too much extraneous information in the form of sounds or words.
[IDTheories, p.49-50]

Perception is always organized. We see whole images rather than collections of parts. The whole is different from the sum of the parts.
[IDGuidelines, p.178]

We require a context to understand the meaning and importance of facts. It’s often easier to remember a story than to remember raw data.
[ID4Advocacy, p.24]


Memory is greater when a verbal and a visual code are activated at the same time than when only one of them is. The image is centrally important in facilitating long-term retention, at least for adults (Paivio, 1991). It is also known that our memory for pictures is superior to our memory for words (Adams & Chambers, 1962). This is called the pictorial superiority effect (Branch & Bloom, 1995). Careful integration of words and pictures engages people more effectively than words or pictures alone (Sadoski & Paivio, 2001).
[IDTheories, p.46-47]

Information that is shared between sensory channels will facilitate learning. Cues that occur simultaneously in auditory and visual channels are likely to be better recalled from memory than those cues presented in one channel only.

Levie and Lentz (1982) found that conveying information through both verbal and visual languages makes it possible for learners to alternate between functionally independent, though interconnected and complementary, cognitive processing systems.
[IDTheories, p.48]

Using a large number of visual examples, Malamed (2009) offers designers six principles for creating graphics and visual language that people can understand: 1) Organize for perception, 2) Direct the eyes, 3) Reduce realism, 4) Make the abstract concrete, 5) Clarify complexity, and 6) Charge it up.
[IDTheories, p.65]

Development of visual language abilities is dependent upon receiver interaction with objects, images, and body language. The same visuals are not equally effective for receivers with different prior knowledge. Images and visual language speak directly to us in the same way experience speaks to us: holistically and emotionally.
[IDGuidelines, p.179]

Memory for pictures is superior to memory for words. This is called the “pictorial superiority effect”. Visuals can strengthen language fluency by enhancing memory and recall, as well as providing a visual schema in which information can be organized and studied.
[IDGuidelines, p.180]


When material is written in plain language, the intended audience will understand the message the first time they read or hear it. However, language that is “plain” to one group of readers may not be at all easy for other audiences to understand. In material written in plain language, the intended audience can:

  • Find what they need.
  • Understand what they find.
  • Use what they find to meet their needs.

Many writing techniques can help you achieve this goal, including active voice, easy-to-read design features, everyday words, and short sentences.
[IDTheories, p.60]

Presumably, textual material does not consist of a string of coordinate units but has a complex hierarchical structure. When this structure is better understood, typographical and other cues may be applied with greater objectivity and efficiency.
[TypographyComprehension, p.228]

The study of the organization of thought in reading is prerequisite to objective and efficient utilization of typographical and other cues.
[TypographyComprehension, p.228]

Audience Analysis

When faced with trying to understand how to design and write for complex information systems, some of our familiar approaches come up short. For example, existing persona literature and templates do not fully account for users with disabilities nor do they consider localized cultural issues (such as regional variations in language use). Moreover, existing models for persona creation do not take into account the increasing mobility of users and what effect that has on design.
[EmbodiedPersonas, p.53]

Technical communication involves people who have feelings. The information and knowledge that technical communicators work with is often mediated through technology. For people with disabilities and also those experiencing emotional distress, technology often enables mediation of their interaction with the world around them, including technical communication. And embodiment often means more to people with disabilities. They have often been forced to pay more attention to their bodies than is required of able-bodied people, and they are often prevented from succeeding by our failure to consider their needs.
[EmbodiedPersonas, p.56]

Mobility, thus, means taking into full account the environment or location where users will be using, sending, or viewing the product or information and the fact that this environment changes as the user moves. In a growing number of cases, that environment or location is mobile. From tablets to mobile phones to smart phones to netbooks, much of the information is delivered while the user is in constant motion.
[EmbodiedPersonas, p.56]

One way to address audience concerns as users of complex systems is to shift the emphasis away from the characteristics of the audience and to focus more specifically on what the users will be doing. This focus on doing allows a more sustained and deliberate focus on the user’s goals and moves technical communication beyond simply focusing on task analysis.
[EmbodiedPersonas, p.57]

Usability and technical communication have always been a combined role. Thus, technical communicators need to intercede and use their audience analysis skills to craft multi-dimensional, embodied personas.
[EmbodiedPersonas, p.59]

First developed by Alan Cooper (1999) to help create usable software, personas have evolved and been adapted to be useful tools for a variety of products, applications, websites, and interactive systems. From Cooper’s original description, others have elaborated on the idea in more comprehensive ways (e.g., Adlin & Pruitt, 2010; Goodwin, 2009; Mulder & Yaar, 2007). Personas provide the design or product team a way to envision users of their end product; they help to communicate key user requirements to all members of a project team; they provide a key orienting device throughout the project to keep members on the same page; they provide a useful way of communicating decisions to internal and external audiences; and they are a key component in helping to structure appropriate and usable interfaces, designs, and information.
[EmbodiedPersonas, p.52]

Adlin, T., & Pruitt, J. (2010). The essential persona lifecycle: Your guide to building and using personas. Burlington, MA: Morgan Kaufmann.
Cooper, A. (1999). The inmates are running the asylum: Why high-tech products drive us crazy and how to restore the sanity. Indianapolis, IN: Sams Publishing.
Goodwin, K. (2009). Designing for the digital age: How to create human-centered products and services. Indianapolis, IN: Wiley Publishing.
Mulder, S., & Yaar, Z. W. (2007). The user is always right: A practical guide to creating and using personas for the web. Berkeley, CA: New Riders.

A user-centered design process starts with lots of questions, rather than answers. The key is identifying the user’s perspective at the outset.
[ID4Advocacy, p.21]
