This post series summarises my current thinking, work and learnings on generative AI (GenAI) over the past year or so.
It is quite hard to approach GenAI as a UX designer and developer from a single point of view. Like many of us, especially people in the field, I have been exposed to GenAI as an everyday user. I am also a designer who designs solutions that utilise GenAI. On top of that, I have used GenAI for development, and I am a developer who uses the APIs provided by AI vendors. These experiences offer many different topics to focus on. I have selected the ones I consider the most interesting.
Let's start with an opinionated take on where we are at the moment.
GenAI is advertised as changing knowledge work forever. Everyone is affected, and some of us will be replaced by the machines. The rest will work more efficiently. However, increased productivity doesn't necessarily benefit individuals — it often leads to higher demands rather than reduced workloads. For organisations, the belief in a major productivity boost from GenAI stems from naive optimism that overlooks the complexities of making it work in the real world. I don't think high expectations are inherently wrong, but they are not necessarily anchored in reality either.
Using GenAI to enhance knowledge work is still in its early stages. Many discussions assume its value is proven, which feels premature — driven by hype, wishful thinking and hopes for ROI.
GenAI is being integrated into apps and services on the assumption that it makes it easier and faster for users to achieve their goals. It is unclear, though, whether goals are met faster, more easily or more cheaply when the entire process is considered. More importantly, AI-assisted work hasn't outperformed traditional methods in quality. The speed increase in certain types of tasks is evident, but the overall impact is not that clear. In many cases the actual outputs are impressive, but not from a design point of view. For instance, being able to generate endless UIs for an unclear problem is not enough for the needs of design. Externally it might look like progress, but it can have serious downsides, the worst being addressing the wrong problem and building complex solutions on top of that.
GenAI is not yet well understood. Adoption is not straightforward: when and why to use it are not obvious, and the practical challenges are non-trivial. Using GenAI should not rely solely on users' personal awareness of critical topics such as data security. Systems should make it apparent how users' data is used and processed further. Organisations need to assess their current processes and practices, and evaluate how and when it is appropriate to use GenAI in their own operations. This is mandatory, it takes time, and it is usually not trivial.
Then there is programming. I personally believe that, in the long term, manual coding with traditional "smart" IDEs remains more productive on demanding tasks than relying on LLMs. I don't have data or research to back this up, but I suspect reliance on LLMs could have greater cognitive drawbacks than anticipated — similar to the "Google effect," where easily accessible information reduces memory retention. Applying something that exists to something new is the essence of programming. Outsourcing that responsibility is detrimental. In that sense, GenAI can be genuinely harmful to individuals.
In any possible future scenario somebody needs to understand how things work.