
To get started in the UX/UI design process, we will spend the first half of the semester designing an app called TypeEverywhere. This fictional app essentially allows the user to take pictures of things – anything, really – around her and make typefaces from formal qualities found in those pictures. We don't need to worry about how the app does this; we just know that we want the app logic to use detected shapes in images to generate a typeface. Knowing this, our job as UX/UI designers is to design the behavior, look, and feel of an app that best serves the user.

 

Ideation

A.k.a. brainstorming, the ideation phase of a UX/UI project is perhaps the most fun. This is where we see what other people have done in the same vein; list what we must have, what we'd like to have, and what we shouldn't ever have; think about how a user interacts; and begin sketching.

 

Precedents and competition

When one begins a UX/UI project, one should ALWAYS start with solid research into the competition. Has something like it been done before? What did they do well? Where were they lacking? What do we want to keep, and what should be thrown away?

As of this writing, there isn't anything on the market that does quite what TypeEverywhere does – it may not even really be possible – but it isn't a completely ridiculous idea either. Here are a few projects that are in some ways similar to, and could perhaps even inform, our TypeEverywhere app:

Functionality Prioritization Exercise

We continue our ideation by talking about must-have features, should-probably-have features, and nice-to-have features. For example, it is really essential for our user to be able to take a photo of an inspiring object. The app should probably allow the user to select specific areas of that image to focus on for subsequent shapes. Finally, it would be really nice for the app to give the user the ability to modify or simplify these shapes for use in a typeface. In no particular order, here is a list of suggested TypeEverywhere features:

Must Have Features

  • Take a picture (access device hardware, camera)
  • Object recognition: User-driven and automatic, generates posterized representation of images for easy edge and shape detection
  • Makes functional typefaces  
  • Evolutionary versioning/iterations
  • In-app, very basic type design engine: App applies estimated stroke widths, contrast, stress, etc., but allows these and other options (kerning, spacing, serif/sans/slab, etc.) to be tweaked for the entire alphabet as well as individual letters.
  • Learns user preferences 
  • Different snapshots
  • Export industry standard typeface formats (OTF, TTF, SVG) 
  • Export/upload to well-known type repositories/sites 
  • Autosave vault
  • Hierarchical project library and file management with nesting 
  • Source image library 
  • Sharing/Cloud access (DropBox, Drive, Box) 
  • Open source font format exportation to other mobile and desktop apps
  • Interactive glossary or type reference library
    • This glossary is drawn from for tool tips that pop up over reference images when app is guiding the user in the creation process
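The object-recognition bullet above mentions generating a posterized representation of an image for easy edge and shape detection. The app's real detection logic is unspecified, but as a toy illustration, here is a minimal pure-Python sketch of the idea: quantize grayscale values into a few bands, then flag pixels whose band differs from a neighbor's.

```python
# Toy sketch only -- the app's actual detection pipeline is unspecified.
# Posterize a grayscale image, then mark "edges" where neighboring
# pixels fall into different posterized bands.

def posterize(pixels, levels=4):
    """Quantize 0-255 grayscale values down to `levels` bands."""
    step = 256 // levels
    return [[min(p // step, levels - 1) for p in row] for row in pixels]

def edge_mask(bands):
    """True where a pixel's band differs from its right or lower neighbor."""
    h, w = len(bands), len(bands[0])
    mask = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if x + 1 < w and bands[y][x] != bands[y][x + 1]:
                mask[y][x] = True
            if y + 1 < h and bands[y][x] != bands[y + 1][x]:
                mask[y][x] = True
    return mask

# A tiny 3x4 grayscale "image": dark region on the left, bright on the right.
image = [
    [10, 12, 200, 210],
    [11, 13, 205, 215],
    [9, 14, 198, 220],
]
bands = posterize(image, levels=4)
edges = edge_mask(bands)
```

The posterizing step is what makes the edge test trivial: after quantization, only the dark/bright boundary survives, so the mask marks a single vertical edge between columns 1 and 2.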

Nice-to-Have Features

  • Ability to use multiple images for single typeface
  • Ability to assign specific detected shapes to shape groups of letters, e.g. circular letters (O, C, P, Q, G, etc.), triangular letters (A, V, W), square letters (T, H, M, X, Z), and odd ones (S, I, etc.).
  • Social media community based around TypeEverywhere users
  • Typeface Family or Category suggestions to build off of
  • Themes, moods, etc.
  • AI optimization [à la Hoefler Obsidian]
  • Versioning or version history
  • Shake to shuffle typefaces
  • Teach basic type anatomy to users
  • Interactive user guide
  • Quick, easy, fun
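The tiers above amount to a MoSCoW-style prioritization exercise. As a hypothetical sketch (feature names are abbreviations of the lists above, not a real spec), the tiers can be kept as plain data so a build backlog falls straight out of them:

```python
# Hypothetical sketch: feature prioritization tiers tracked as data.
# Feature names are shorthand for entries in the lists above.

TIERS = {
    "must": ["take a picture", "object recognition",
             "makes functional typefaces", "export OTF/TTF/SVG"],
    "should": ["select areas of interest", "learns user preferences"],
    "nice": ["shake to shuffle", "social community", "AI optimization"],
}

TIER_ORDER = ["must", "should", "nice"]

def backlog(tiers):
    """Flatten tiers into a single build order, must-haves first."""
    return [f for tier in TIER_ORDER for f in tiers[tier]]

def tier_of(feature, tiers):
    """Return the tier a feature belongs to, or None if untracked."""
    for tier, feats in tiers.items():
        if feature in feats:
            return tier
    return None
```

Keeping the tiers as data rather than prose makes it cheap to re-sort the backlog when a nice-to-have gets promoted during later design rounds.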

Task Modelling and User Journeys

Once we have the basic requirements down, we should really begin planning out the many ways our users might use our app. As an example, we might at first describe people using Instagram thusly:

"You open Instagram, take a picture of you and your friend making duck lips on a beach, then post this image, then close Instagram."

And while this is certainly something that plenty of people do, given the abundance in my Insta feed of people making duck lip faces while on beaches, that's not the whole picture. Some people go on just to look at their duck lip friends; others go on to look at comments and likes on their own stupid duck lip pics; still others go on just to change their privacy settings so that exes can't stalk them...and so on.

These different usage narratives are called "user task models," and they will inform not only which screens we develop, but also how we get to those screens and how they're all linked together. Below are a few that I have thought of. Think of them as little stories, each titled after its main narrative thread. For example:

 

The New Project

  1. Sees something inspiring
  2. Opens app
  3. Clicks “new project,” which opens the capture screen (camera). 
  4. Takes picture 
  5. Option to select and add area(s) of interest (or take another image, or move on to the project)
  6. App asks how each area should be treated in terms of type anatomy (ascender, crossbar, serif, terminal, bowl, shoulder, etc.), but user selects “decide later”
  7. Returns to the project screen, where user is asked to begin with a source image
  8. App enters the edit-reference-image screen and asks how the area(s) should be treated (ascender, crossbar, serif, terminal, bowl, shoulder, etc.)
  9. User chooses areas and sets choices for anatomy
  10. App logic suggests a few default iterations of typefaces (serif, sans, geometric, script, etc) based on shape areas in reference image.
  11. User asked to shake device to re-iterate the selections or select one she likes.
  12. User shakes phone and selections reiterate
  13. Customization allows user to modify, select or deselect type options. 
  14. User saves snapshot.
  15. User exports typeface with options (OTF, SVG, TTF, etc).
  16. User shares typeface with client or friend.
  17. User saves project and exits app.

The Old Project

  1. User opens app
  2. User goes into project library 
  3. Selects existing project
  4. Presented with options to create a new snapshot from an image, modify an existing snapshot, or export an existing snapshot, as well as edit existing reference images or add reference images
  5. User edits existing reference image
  6. Selects an additional area of interest. 
  7. App asks how the area should be treated in terms of type anatomy (ascender, crossbar, serif, terminal, bowl, shoulder, etc.)
  8. User makes anatomy choices with tooltip overlay
  9. App logic suggests a few default iterations of typefaces (serif, sans, geometric, script, etc) based on new shape areas in reference image.
  10. User asked to shake device to re-iterate the selections or select one she likes.
  11. User selects one she likes.
  12. Customization/modification screen allows user to modify, select or deselect type options. 
  13. User saves snapshot.
  14. User saves project and exits app.

Modify existing snapshot

  1. User opens app
  2. User goes into project library 
  3. Selects existing project
  4. Presented with options to create a new snapshot from an image, modify an existing snapshot, or export an existing snapshot, as well as edit existing reference images or add reference images
  5. User selects modify existing snapshot
    • User asked to edit original or copy to new
  6. Snapshot customization/modification screen allows user to modify, select or deselect type options, change size, or change the view. 
  7. If the original was edited, user is given the choice to save the snapshot over the original or to produce a copy
  8. User saves snapshot as unique copy.

Add reference image (slow)

  1. User opens app
  2. User goes into project library 
  3. Selects existing project
  4. Presented with options to create a new snapshot from an image, modify an existing snapshot, or export an existing snapshot, as well as edit existing reference images or add reference images
  5. User selects “add reference image,” which opens up capture screen (camera). 
  6. Takes picture 
  7. Presented with options to select and add area(s) of interest (or take another image, or move on to the project)
  8. App asks how each area should be treated in terms of type anatomy (ascender, crossbar, serif, terminal, bowl, shoulder, etc.), but user selects “decide later”
  9. Doesn’t save project, just exits app (project saves automatically)

Quick add reference image

  1. User opens app
  2. Clicks “add image,” which opens up capture screen (camera). 
  3. Takes picture 
  4. Presented with options to select and add area(s) of interest (or take another image, save to the image library, or save to a new project)
  5. User saves to image library
  6. Exits app
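Taken together, the task models above are just paths through one shared screen graph. As a toy sketch (the screen names are my shorthand, not a final screen inventory), each journey can be written as an ordered list of screens; the union of their step-to-step transitions then tells us which screens must exist and which must link to which:

```python
# Toy sketch: user journeys as ordered screen lists. Screen names are
# shorthand invented for illustration, not a final screen inventory.

JOURNEYS = {
    "new project": ["home", "capture", "area select", "anatomy",
                    "iterations", "customize", "export", "home"],
    "old project": ["home", "library", "project", "edit reference",
                    "anatomy", "iterations", "customize", "home"],
    "quick add": ["home", "capture", "area select", "image library"],
}

def screens(journeys):
    """Every screen mentioned in any journey."""
    return {s for path in journeys.values() for s in path}

def transitions(journeys):
    """Every (from, to) pair some journey steps through."""
    edges = set()
    for path in journeys.values():
        edges.update(zip(path, path[1:]))
    return edges
```

Reading the output this way makes shared structure obvious: "home" to "capture" serves two different journeys, so the capture screen needs to work whether or not a project exists yet.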

 

 
