Object-Oriented User Interface Design and avoiding user panic
Object-Oriented User Interface Design is a book by Manabu Ueno, currently available only in Japanese. I haven’t had the chance to get my hands on it, but it seems to be based on articles he has written over the years. Here, I will be referring specifically to this one (in Japanese only). Object-oriented design started a while ago; Ueno mentions Alan Kay and the IBM object-oriented interface design guidelines as precursors. The idea of treating parts of a program as “objects” has helped shape how systems are designed, but it can also be applied to UI design, and even become a bridge between programming and design.
Thinking “human” first
Ueno opens his article with the example of a vending machine he once found in a diner. This machine asked the user to put the money in first and then select the set menu they wanted to buy. This money-first, ticket-later design makes sense if you consider the usual drink vending machines found everywhere in Japan: you first insert coins or a bill, and then select the product you want. However, it doesn’t match the regular process of buying something. Normally, if you go to a store, you don’t pay before getting a chance to browse the products. You take a look first, and once you decide to buy something, you hand the money to the cashier. Ueno’s diner vending machine also had a voice saying out loud “please enter your money first” in Japanese, which seemed to drive away potential customers, especially panicked non-Japanese speakers.
Continuing with the example, Ueno mentions that more modern vending machines have started accepting payment through IC cards. These new machines actually let the user make their choice first: you press the product button, then confirm your payment by placing the card on the sensor. The first time I experienced one of these, it felt a bit out of place, as I had gotten used to doing it the other way around. Even when it doesn’t make sense, some designs have forcibly reshaped people’s way of thinking in favor of the machine’s needs.
The diner vending machine is an example of a functionality-oriented user interface. The function of these vending machines is to buy products, so the focus is on the verb “buy”, and in order to buy something, the first thing you do is put money into the machine. Another example is ATMs, where you get a menu filled with verbs: withdraw, deposit, transfer, and so on. Object-oriented user interfaces, on the other hand, focus on, well, objects. Not the verb but the noun.
So in a vending machine, you would first consider the “drinks”, “food”, or whatever you are buying, and then perform the “verbs” on them. In a vending machine you are probably stuck with “buy”, but this idea of thinking through the objects first, rather than through your actions, aims to make interfaces more natural for users. You are presenting them with things they can associate with the real world, including their properties and the actions that can be performed with them. The user then has certain expectations regarding the objects you provide within the UI.
Ueno proposes a way of modeling the objects inside an interface by treating them as if they were classes in a program. Yukari Shingyouchi shows a simplified version of this process on these slides (in English). The steps can be summarized as:
- Define the objects: their attributes and what you can do with them.
- Make the views: what they would look like by themselves and as a group with other similar objects.
- Place them in a layout: arranging them in context along with other objects, to show how they will be displayed and how they will interact with each other.
Essentially this process is about making an abstraction of a real object and putting it in the context of your layout. This could be seen as some kind of “allegory”, a representation of real objects in the virtual space.
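To sketch how those three steps might look in code, here is a small, hypothetical TypeScript example; the names and the string-based rendering are my own illustration, not taken from the slides. The object is a class carrying its attributes and actions, the views are render functions for one object and for a collection, and the layout places the collection in context.

```typescript
// Step 1: define the object — its attributes and what you can do with it.
class Drink {
  constructor(public name: string, public price: number, public stock: number) {}

  buy(): boolean {
    // The one "verb" a vending machine offers, attached to the object itself.
    if (this.stock > 0) {
      this.stock -= 1;
      return true;
    }
    return false;
  }
}

// Step 2: make the views — the object by itself, and as part of a group.
function drinkView(d: Drink): string {
  return `${d.name} — ¥${d.price}${d.stock === 0 ? " (sold out)" : ""}`;
}

function drinkListView(drinks: Drink[]): string {
  return drinks.map(drinkView).join("\n");
}

// Step 3: place them in a layout — arrange the views in context.
function vendingMachineLayout(drinks: Drink[]): string {
  return `[ Vending Machine ]\n${drinkListView(drinks)}\n[ Select first, then pay ]`;
}
```

Starting from the noun (Drink) rather than the verb (buy) mirrors the object-first flow Ueno describes: the user selects the product, and payment follows.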
Some recurring examples are email services and to-do lists (in Japanese). Both build on a similar concept, a piece of paper and a collection of papers, but each is endowed with a different meaning. There are similarities in what you can do with the two: creating, editing, deleting; however, their properties are shaped by their purpose. A to-do item will most likely have short, text-only content, whereas you would want to attach pictures and other things to your emails, much as you would expect to send packages through a real postal service. Being able to compare the objects to real-life examples means users will also have certain expectations to be fulfilled.
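Following the same modeling idea, a hypothetical sketch of those two objects might look like this (again, the class and field names are my own, chosen for illustration): both inherit a shared “paper” concept that can be created and edited (deletion would live on the collection holding them), while their attributes diverge with their purpose.

```typescript
// The shared "paper" concept both objects are built on.
abstract class Paper {
  constructor(public text: string) {}
  edit(newText: string): void {
    this.text = newText;
  }
}

// A to-do item: short, text-only content, plus a completion state.
class TodoItem extends Paper {
  done = false;
  complete(): void {
    this.done = true;
  }
}

// An email: same base, but it can carry attachments —
// like packages in a real postal service.
class Email extends Paper {
  attachments: string[] = [];
  constructor(public subject: string, body: string) {
    super(body);
  }
  attach(filename: string): void {
    this.attachments.push(filename);
  }
}
```

The divergence in attributes (a done flag versus a subject line and attachments) is exactly where the two allegories part ways, even though the underlying verbs overlap.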
Abstractions from reality: two examples
Reading about OOUI reminded me of Marshall McLuhan; I didn’t have a clear reference at the time, but I remembered bits about considering media as extensions of the body. A quick search will show you this idea of media as extensions of ourselves, but it can also be expanded to consider media as an extension of our physical surroundings. OOUI might be just that.
It also makes sense for a person to associate the parts of a UI with real objects, both to understand their possibilities and their limitations. These associations can also help bring focus when designing an application: as a user, you might expect the objects on the screen to behave in certain ways if they are presented as associations with real-world objects. Therefore, their functions need to meet these implicit expectations.
Thinking about OOUI for applications might help guide developers: if you are using associations with real objects, users will expect them to behave as such. A clear example of OOUI can be found in programs used for graphic design, as mentioned here. I use digital painting software regularly, and once I considered this example in the context of OOUI, it all made more sense to me. This kind of software presents you with the allegory of an artist’s studio, along with abstractions of the objects you can find there: a canvas, pencils, brushes, and other drawing tools that you would expect to behave much like the actual objects. Of course, they all come with certain quirks as well as additional advantages, things like automatically smoothing a line you trace so it looks neater, or the perpetually abused Ctrl+Z. Essentially, though, the software is successful if it manages to give the user the experience of having a collection of drawing tools at their disposal.
Other fun examples can be found in video games. Now, video game design may be a whole different monster on its own, but some games benefit from having the objects inside them behave as you would expect. Think, for now, of games with well-crafted puzzles and balanced physics engines, like Zelda: Breath of the Wild; almost anything you might think could logically be done with an object is possible. For example, just the possibility of handling fire inside the game gives the player an expectation of how to affect the objects around them. If you hunt a critter in this game you get its meat, but if you do it with a fire weapon, you get roasted meat. Of course, the game makes its own rules too, but the expectation of the user/player is fulfilled. Also, in virtual environments, the “object” takes on a whole different meaning, as it goes beyond being a simple icon on the screen. It’s possibly easier to imagine what it could do (certainly not easier to program); even if the object is still an abstraction of a real thing, it will surely behave a bit differently in the context of the rest of the UI, in this case, the game world.
OOUI benefits the design of applications at its core. It encourages us to think about what users will be able to do with the tools at their disposal and how they will expect those tools to behave. This helps focus the design of an application on fulfilling users’ expectations and making it relatable enough that users won’t have too much trouble using it. It can also help bring the members of a development team onto the same page, since the team now speaks a common language of real-world objects and can refer to actual things and their experiences with them.