I bumped into an interesting video from Y Combinator [1] related to AI and future user interfaces. It reminded me of a book I started but never finished: The Design of Future Things by Don Norman. Anyway, here are my notes from the video:
Questions
How will AI change the interfaces of the future?
How many milliseconds of response latency feel natural when talking to an AI?
How to handle human interruptions appropriately?
How to mitigate issues with latency?
What are the future ways to visualise and control the AI process? (E.g. canvas flow charts)
How to design human involvement into the use case?
How to determine the right abstraction level within the task? (E.g. complete autonomous control of email responses vs. template generation vs. helping with individual phrases)
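The abstraction-level question above can be made concrete with a small sketch. This is a hypothetical model (the `AssistLevel` enum and `assist` function are my own illustration, not from the video) of the three email-assistant autonomy levels mentioned: fully autonomous replies, full draft generation, and phrase-level help.

```python
from enum import Enum, auto

class AssistLevel(Enum):
    # Hypothetical autonomy levels for an email assistant
    AUTONOMOUS = auto()  # send replies without human review
    DRAFT = auto()       # generate a full draft for the user to edit
    PHRASE = auto()      # only suggest individual phrases

def assist(level: AssistLevel, incoming: str) -> str:
    """Return a placeholder action for the chosen abstraction level."""
    if level is AssistLevel.AUTONOMOUS:
        return f"send: auto-reply to {incoming!r}"
    if level is AssistLevel.DRAFT:
        return f"draft: full reply to {incoming!r}, awaiting approval"
    return f"suggest: candidate phrases for replying to {incoming!r}"

print(assist(AssistLevel.DRAFT, "meeting request"))
```

Making the level an explicit, user-visible setting is one way to design human involvement into the use case rather than leaving it implicit.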
Takeaways
Today's software is built around things you point at on the screen: nouns. AI shifts the interface toward verbs, actions you ask for.
The entry point to AI interfaces has been the chat box, but there are other ways to build user interfaces for AI-native apps.
Latency is an issue; it makes the interaction feel less human.
The problem with open-ended prompts is that the input can be anything, so many things can go wrong.
A canvas is an interesting way to show the flow a process moves through.
Practical tips
Add sources to make answers trustworthy; they can be granular and inline.
It's helpful to include example prompts near the input. Include a list of domain-specific terms to help with prompting.
Consider showing what's happening behind the scenes.
Find ways to keep the user engaged while the prompt is processing.
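The last two tips both come down to showing progress instead of a blank spinner. A minimal sketch of the idea, assuming a token-streaming API (the `stream_tokens` generator and its `delay` parameter are stand-ins, not any real SDK): print partial output as it arrives so the user sees the answer forming.

```python
import time

def stream_tokens(tokens, delay=0.05):
    """Yield tokens one at a time, simulating incremental model output."""
    for tok in tokens:
        time.sleep(delay)  # stand-in for per-token generation latency
        yield tok

# Render tokens as they arrive instead of waiting for the full answer.
answer = []
for tok in stream_tokens(["Streaming", "keeps", "the", "user", "engaged."], delay=0):
    answer.append(tok)
    print(tok, end=" ", flush=True)
print()
```

The total wait is unchanged, but perceived latency drops because something useful is on screen almost immediately.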