
To get started with the UX/UI design process, we will spend the first half of the semester designing an app called TypeEverywhere. This fictional app lets the user take pictures of things (anything, really) around her and make typefaces from formal qualities found in those pictures. We don't need to worry about how the app does this; we just know that we want the app logic to use shapes detected in images to generate a typeface. Knowing this, our job as UX/UI designers is to design the behavior, look, and feel of an app that best serves the user.



A.k.a. brainstorming, the ideation phase of a UX/UI project is perhaps the most fun. This is where we see what other people have done in the same vein; list what we must have, what we'd like to have, and what we should never have; think about how a user interacts; and begin sketching.


Precedents and competition

When one begins a UX/UI project, one should ALWAYS start with solid research into the competition. Has something like it been done before? What did they do well? Where were they lacking? What do we want to keep, and what should be thrown away?

As of this writing, there is nothing on the market that does quite what TypeEverywhere does. It may not even be technically feasible, but it isn't a completely ridiculous idea either. Here are a few projects that are in some ways similar to our TypeEverywhere app, and could perhaps even inform it:

Functionality Prioritization Exercise

We continue our ideation by sorting features into must-haves, should-probably-haves, and nice-to-haves. For example, it is truly essential for our user to be able to take a photo of an inspiring object. The app should probably allow the user to select specific areas of that image to focus on for subsequent shapes. Finally, it would be really nice for the app to let the user modify or simplify these shapes for use in a typeface. In no particular order, here is a list of suggested TypeEverywhere features:

  • Project library
  • Reference image library
  • Ability to generate multiple iterations and use an evolutionary algorithm to arrive at a likeable typeface
  • Ability to manually select areas of an image for their shapes
  • Camera access that generates a posterized representation of images for easy edge and shape detection
  • Ability to use multiple images for single typeface
  • Ability to assign specific detected shapes to shape groups of letters, e.g. circular letters (O, C, P, Q, G, etc.), triangular letters (A, V, W), square letters (T, H, M, X, Z), and odd ones (S, I, etc.)
  • In-app, very basic type design engine: the app applies estimated stroke widths, contrast, stress, etc., but allows these and other options (kerning, spacing, serif/sans/slab, etc.) to be tweaked for the entire alphabet as well as individual letters
  • Uses AI to apply an optimization pass to the typeface, ensuring proper color, weight, balance, harmony, etc.
  • Allows sharing on popular font repos
  • Contains a separate type glossary and catalogs with different classifications
    • The app draws on this glossary for tooltips that pop up over reference images when guiding the user through the creation process

We may come up with similar features, new ones, or better ones, but we need to rank them from most important to least.
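The ranking step above can be sketched as a tiny script that sorts features into MoSCoW-style buckets (must/should/nice). The feature names and bucket assignments below are illustrative picks from our list, not a final prioritization:

```python
# Hypothetical TypeEverywhere features, each assigned a priority bucket.
features = {
    "Camera access with posterized preview": "must",
    "Manually select image areas for shapes": "should",
    "Project library": "should",
    "Modify or simplify detected shapes": "nice",
    "Share to popular font repos": "nice",
}

# Rank the buckets so the team tackles essentials first.
priority = {"must": 0, "should": 1, "nice": 2}
roadmap = sorted(features, key=lambda name: priority[features[name]])

for name in roadmap:
    print(f"[{features[name]}] {name}")
```

Because Python's sort is stable, features within the same bucket keep the order we listed them in, which is handy when the team has already agreed on rough ordering inside a bucket.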


Task modeling and user journeys

Once we have the basic requirements down, we should begin planning out the many ways our users might use our app. For example, we might at first describe someone using Instagram like this:

"You open Instagram, take a picture of you and your friend making duck lips on a beach, then post this image, then close Instagram."

And while this is certainly something plenty of people do, given the abundance in my Insta feed of people making duck-lip faces on beaches, it's not the whole picture. Some people go on just to look at their duck-lip friends; others go on to check comments and likes on their own, stupid duck-lip pics; still others go on just to change their privacy settings so that exes can't stalk them...and so on.

These different usage narratives are called "user task models," and they will inform not only what screens we develop, but also how we get to those screens, and how they're all linked together.
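One minimal way to make task models concrete (purely illustrative; the screen names here are hypothetical) is to write each model as a path through screens, then derive both the set of screens we must design and the navigation links that must connect them:

```python
# Hypothetical Instagram-style task models: each is a named path through screens.
task_models = {
    "post a photo": ["feed", "camera", "edit", "share", "feed"],
    "browse friends": ["feed", "profile", "feed"],
    "change privacy": ["feed", "settings", "privacy", "feed"],
}

# The union of all steps tells us which screens to develop...
screens = {step for path in task_models.values() for step in path}

# ...and each consecutive pair of steps tells us which navigation
# links must exist between screens.
links = {(a, b) for path in task_models.values() for a, b in zip(path, path[1:])}

print(sorted(screens))
print(sorted(links))
```

Even a toy model like this surfaces design questions early: every link in `links` needs a visible way to get from one screen to the next, and any screen reachable from many paths (here, `feed`) is a candidate for the app's home screen.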