POSTS

testing

Feb 23, 2026

testing for linked

this is code block

import

Pomodoro Sand Timer

Feb 22, 2026

The Pomodoro Technique is simple and practical: 25 minutes of focused work, followed by 5 minutes of rest. A structured work–pause–repeat rhythm that helps you commit to focused time and move tasks forward.

What never felt quite right to me was how it shows up in both the interaction and the interface.

You have to tap every time. Start, stop, restart. Each session begins with a deliberate action before you can actually focus. Repeated throughout the day, that small trigger becomes friction. And once it starts, the dominant visual is a shrinking number. The screen centers the countdown. Instead of feeling contained within a block of focus, it can feel like you’re watching time being consumed.

So…

What if the buttons were removed?
What if it didn’t show the timer?

Here I'm exploring an idea with an M5StickC Plus 2. It’s small and self-contained, so the screen is limited. Whatever appears on it has to be deliberate. There’s no space for layered controls or secondary information.

It also has built-in motion sensors that detect orientation and movement. That means interaction doesn’t have to depend on tapping. The device responds to how it’s positioned. Upright, flat, flipped. The interaction becomes spatial instead of purely screen-based.

The Concept

Choose 25 or 50 minutes.

Stand the device upright, and sand begins to fall. The device behaves like a sandglass.

Grain by grain, one half empties while the other slowly fills. Under the surface, the sand follows simple gravity and collision logic.
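As a rough illustration of that logic, here's a minimal falling-sand step in Python. It's a sketch of the general technique, not the code from the repo linked under Resources: each grain falls straight down if the cell below is free, otherwise it tries to slide diagonally, which is what produces the piling.

```python
# Minimal falling-sand step on a 2D grid: 1 = sand, 0 = empty.
# Each call advances the simulation by one frame.
def step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    # Scan bottom-up so lower grains settle before the ones above them.
    for r in range(rows - 2, -1, -1):
        for c in range(cols):
            if grid[r][c] != 1:
                continue
            # Try straight down first, then the two diagonals.
            for dc in (0, -1, 1):
                nc = c + dc
                if 0 <= nc < cols and new[r + 1][nc] == 0 and new[r][c] == 1:
                    new[r][c], new[r + 1][nc] = 0, 1
                    break
    return new
```

Collisions fall out of the rule for free: a grain that can't move down or diagonally simply stays put, so piles form on their own.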

The numeric timer only appears in the last three minutes. I didn’t want precision to dominate the entire session. Early on, the goal is immersion. As the session nears its end, I think showing the remaining time can support closure and help you wrap up or prepare to pause.
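The reveal rule itself is tiny. A sketch of it, with the three-minute threshold from above (the function name and signature are mine, not the repo's):

```python
# Show the numeric countdown only in the final three minutes;
# earlier in the session the sand is the only time cue.
REVEAL_AT = 3 * 60  # seconds

def countdown_label(seconds_left):
    if seconds_left > REVEAL_AT:
        return None  # nothing drawn; the sand carries the session
    return "%d:%02d" % (seconds_left // 60, seconds_left % 60)
```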

Interaction Model

From an interaction perspective, this became an exploration of digital and physical alignment. The interface is digital, but the interaction is physical. I wanted the behavior to work the way a sandglass works. When the device stands upright, sand falls. When it lies flat, the flow stops. When it’s flipped, the cycle begins again. The orientation of the object defines the state of the system. There are no buttons cycling through modes and no extra UI layers explaining what’s happening. The physical gesture directly controls the digital simulation.
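A minimal sketch of that orientation-to-state mapping, assuming a gravity vector read from the IMU in g units, and an axis convention where +y points out the top of the upright device (both are my assumptions, not the M5StickC's documented frame):

```python
# Map a gravity reading to one of the sandglass states.
# threshold < 1.0 leaves a dead zone so small wobbles don't flip state.
def classify_pose(ax, ay, az, threshold=0.8):
    if ay > threshold:
        return "upright"   # sand falls
    if ay < -threshold:
        return "flipped"   # the cycle restarts
    if abs(az) > threshold:
        return "flat"      # flow pauses
    return "unknown"       # mid-motion / ambiguous
```

The device's state is then just whatever pose the last reading implies; no button ever has to be pressed.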

Reflection

After spending some time with it, a few things stood out to me.

  1. Representation shapes experience
    The structure of Pomodoro didn’t change, only the interface did. Numbers felt rigid to me. They constantly reminded me that time was ticking down. With sand, the timer fades into the background. It feels less like watching something disappear and more like watching something flow. The underlying system stays the same, but the emotional framing shifts completely.

  2. Physical intuition reduces friction
    This experiment made me wonder whether physical intuition reduces friction. When a digital system behaves the way a physical object behaves, it requires less explanation. I don’t have to think about it as much.

  3. Feeling is a differentiator
    The structure stayed the same. The logic stayed the same. The function stayed the same. But the experience was different. In a world where building functional products is easier than ever, function alone doesn’t make something stand out. Feeling does. This exploration became an exercise in designing for that.

Resources

  1. You can get the code here: https://github.com/thebuddyman/m5-playground/blob/main/apps/pomodoro_sandglass_app.py

  2. Falling sand mechanics are inspired by https://jason.today/falling-sand

  3. Music used: reset, restart, focus - the cozy lofi

Paper Design App. Exploring Shader Tools

Feb 17, 2026

A new design tool on the block: Paper. It’s been on my radar, and recently I gave it a try for designing UIs. What caught my attention were the shaders. They’re essentially filters: effects you can apply to an image, or generate based on a set of parameters.

For example, I tried the Fluted Glass image filter and adjusted its parameters to explore different visual variations. What makes it terrific is its real-time processing.

And it’s animated. You can set the animation speed to 0% and download different static variations of the image.

Here are a few results from a quick test. Feel free to download them if you’d like.

M5: First Encounter

Feb 16, 2026

It all started when I saw a tiny, saturated yellow rectangular device in this LinkedIn post. I was intrigued by this retro-looking piece. A creative technologist used it to create a pocket-sized AI assistant for his kid. It records a five-second audio clip, sends the query to OpenAI, then shows the answer on a screen.

A week later, voila, I ordered one myself from Ali Express. It took almost 10 days to arrive, though (Amazon is faster, but €15 more expensive). Still, my inner kid erupted with joy when I opened it. I couldn’t wait to tinker with it and try out a few ideas.

The M5Stick, in a nutshell, is a tiny programmable hardware platform for IoT prototyping, featuring built-in sensors, a display, input controls, and wireless connectivity. You could use it to prototype a wearable step counter or activity tracker, for example.

There’s an innovation studio in the UK called Special Project whose work combining digital experiences with physical products I’ve always admired. Exploring with the M5StickC isn’t really comparable to designing a full physical product; it’s a different medium. But it lets me step outside the mobile/web app space and start thinking in new interaction formats. I’m hopeful this is the beginning of experimenting with something bigger beyond the M5StickC.

Testing an event trigger for the accelerometer in UIFlow2 drag-and-drop IDE

First things first: to get started, I relied on Gemini to learn the basics, such as installing (burning) the firmware, connecting to WiFi, and finally getting into programming. I chose UIFlow2 over Arduino because it’s easier to use, combining a drag-and-drop IDE with a code editor. UIFlow2 code is Python-based. I’d never learned Python before, but I learned C and C++ back in uni, so I understand the core programming concepts. Mostly, though, I’ll rely on an LLM to help me generate the code.

I skipped the most common first lesson, the “Hello World” print. Instead, I used drag and drop to build a tilt status and test how to add visuals and handle events. In this case, one event fires when the accelerometer detects that the device is tilted, and another fires when the tilt passes a certain value, incrementing a counter.

UI Editor and Logic Editor in the UIFlow2 IDE
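The tilt-plus-counter behaviour can be sketched in plain Python. This mirrors the logic of the blocks rather than reproducing UIFlow2’s actual event API, and the threshold value is illustrative:

```python
# Count discrete tilt events from a stream of accelerometer x-readings.
# A threshold crossing counts once; the reading must drop back below
# the threshold before the next event can fire (edge detection).
class TiltCounter:
    def __init__(self, threshold=0.5):
        self.threshold = threshold
        self.armed = True
        self.count = 0

    def update(self, ax):
        if self.armed and abs(ax) > self.threshold:
            self.count += 1
            self.armed = False   # wait until the device levels out
        elif abs(ax) < self.threshold:
            self.armed = True
        return self.count
```

Without the `armed` flag, holding the device tilted would fire the event on every sensor reading instead of once per tilt.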

Next, I created the Gimme-a-Pun app. I connected to an API and used the code editor.

At the beginning, I was using the drag-and-drop IDE, but I couldn’t find an API integration block. I tried searching the documentation and prompting AI, but no luck; I had to course-correct a couple of times because the AI hallucinated a lot, referencing UIFlow version 1. So I jumped into the code editor.

I had an idea: every time the main button is pressed, it gives you a pun. I used a public endpoint: https://official-joke-api.appspot.com/random_joke

I generated the code using Gemini Pro. Other than a text-wrapping issue, it functioned as intended. For the wrapping, I couldn’t find a ready-made function I could call; I saw an Arduino example, but I wanted to keep it in Python on UIFlow for now. After a few more prompts, and after giving Gemini a reference to https://docs.python.org/3/library/textwrap.html, I got a decent solution.
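For illustration, here’s roughly what the app boils down to, written with desktop-Python stand-ins: `urllib` in place of the device’s HTTP module, and a guessed 20-characters-per-line screen width (both assumptions, not the device’s real constraints).

```python
import json
import textwrap
import urllib.request

JOKE_URL = "https://official-joke-api.appspot.com/random_joke"

def fetch_pun(url=JOKE_URL):
    # On the device this would be an embedded HTTP client;
    # urllib stands in here for a desktop sketch.
    with urllib.request.urlopen(url) as resp:
        joke = json.loads(resp.read())
    return joke["setup"], joke["punchline"]

def wrap_for_screen(text, chars_per_line=20):
    # textwrap breaks on word boundaries, so words never get cut
    # mid-letter on the tiny screen.
    return textwrap.wrap(text, width=chars_per_line)
```

Each wrapped line would then be drawn on its own row of the display when the main button is pressed.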

Check this out…

Some takeaways

Beyond being a hobby project, this feels, to me as a (digital) product designer, like a new playground for exercising:

  1. Designing within tight constraints. A small screen, three buttons, no keyboard, no mouse. What can I create without them?

  2. Thinking in a different medium. An opportunity to explore solutions beyond a mobile app. A what-if for a tool that does not live on a phone.

  3. Mashing up my design and prototyping skills with code. It does not fully feel like coding since I am not writing much syntax, but it is not pure vibe coding either. It sits somewhere in between. Still, it’s promising. It gives me another tool I can reach for when something is better prototyped in code.

I’m looking forward to exploring more ideas.

If you’re also playing with this tiny tool, I’d love to hear from you.

The skills that become more essential in the age of AI

Oct 5, 2025

As part of Hyper Island’s Industry Research Project (IRP), I looked at how junior UX designers should adapt in an AI-accelerated industry, and which core abilities stay resilient as execution-heavy work gets automated and commoditised.

When I started the project, I focused on the disappearance of entry-level UX design jobs, an issue that felt increasingly relevant. The causes are mixed: a slow economy and global uncertainty play a part, and although AI isn’t the main reason, it’s still eating up entry-level tasks or making them easier to do without a junior hire.

In design, it’s not the entry-level tasks that are most affected, but the early exploration ones: generating first drafts, alternative layouts, or draft UX copy. And it’s only getting better over time, relentlessly devaluing a human designer. A founder or business-minded person might think, why do we need to hire a human designer if we can just subscribe for 20 bucks? Or, we just need one conductor to orchestrate LLMs to design, code, and write copy.

Sitting with that reality pushed me into two different states of thinking.

One was defensive. It’s the mode where you look for what AI can’t do, what it can’t replace. I found things like curiosity, judgment, empathy, connecting ideas, systems thinking, and problem solving. Are these new skills? Not really. They’ve just been buried under conversations about pixel-perfect work, design systems, and cool micro-interactions.

The other was more opportunistic. It’s seeing AI in a more positive light, as a real opportunity to make our work better. When I interviewed leaders and designers who use AI, a recurring theme emerged:
“AI is just a tool.”
“It’s only as good as the person using it and that person’s depth in their domain.”
Good judgment complements the use of AI, and to develop good judgment, you need other skills: synthesis, critical thinking, systems thinking, and the ability to frame or reframe a lens.

Looking at it from both sides, the need to protect and the chance to grow point to the same thing: human skills. The very abilities that make us human are what make us more valuable and make the use of AI more effective.

Image credit: Luke Skywalker and C-3PO in Star Wars: A New Hope (1977) © Lucasfilm / Disney.

Is AI coming to UX design jobs? I scanned 554 UX design openings globally—here’s what I found

Sep 18, 2025

I was curious how often today’s UX job openings mention AI. It reminded me of almost 10 years ago, when design systems became a buzzword and “design system” started showing up in job requirements in all kinds of shapes.

Now the new buzz is AI. I scraped UX job openings on LinkedIn and Indeed (thanks to this repo). Alongside AI mentions, I also looked at how many roles are entry-level.

Updated from 576 to 554 after removing postings where missing cells were unintentionally counted.
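The headline percentages below are just counts over the total. A sketch of that tally, with assumed field names (the schema of the actual scraped data may differ):

```python
# Summarize a list of postings into the three headline percentages.
# Each posting is a dict with boolean flags; the field names here
# are illustrative, not the real export's columns.
def summarize(postings):
    total = len(postings)
    ai = sum(1 for p in postings if p["mentions_ai"])
    entry = sum(1 for p in postings if p["entry_level"])
    both = sum(1 for p in postings if p["mentions_ai"] and p["entry_level"])

    def pct(n):
        return round(100 * n / total, 2)

    return {"ai": pct(ai), "entry": pct(entry), "ai_entry": pct(both)}
```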

Here’s what I found:

Global

  • ~11.19% mention explicit AI requirements

  • ~15.88% target entry-level (up to 2 years)

  • ~1.62% mention AI requirements on entry-level roles

US (New York, SF, Seattle)

  • ~9.26% mention explicit AI requirements

  • ~15.03% target entry-level (up to 2 years)

  • ~0.62% mention AI requirements on entry-level roles

Notes

  1. Job postings were taken within a 30-day window starting August 8, totaling 554 postings across 12 cities.

  2. Data came from LinkedIn and Indeed, focusing on UI/UX and product design roles.

  3. Entry-level included postings that accept 0–1 year of experience.

  4. Other platforms are missing, such as Glassdoor, Google, and company career pages.

Recurring themes in the listings

  • Curiosity/interest in AI: phrases like “curiosity about AI,” “openness to AI tools,” “deep curiosity,” and “enthusiasm for AI” appear 5–6 times.

  • AI tools (explicit mentions): direct references to tools (ChatGPT, MidJourney, Galileo AI, Uizard, etc.) appear 10+ times. Variants like “AI-driven tools,” “AI-assisted design tools,” and “AI prototyping tools” show up consistently.

  • AI-driven workflows / integration: mentions of integrating AI into workflows, “AI-driven workflows,” or “transforming processes with AI” appear 5–6 times.

  • AI product specialization: knowledge of AI product domains (conversational UX, personalization features, data-heavy AI/insights products) appears 3–4 times.

Three ways AI shows up in job requirements

  • AI literacy / awareness / curiosity: general understanding, openness, staying updated

  • AI tools fluency: hands-on use of AI-assisted design/prototyping platforms, plugins, workflows

  • AI product specialization: designing AI-powered features or products

Final Notes

I spoke with Roger Wong, who recently wrote about the design talent crisis and the vanishing bottom rung. He told me the evidence isn’t clear that AI is the reason entry-level roles are disappearing, but the bottom rung has been thinning for a while, and AI may be accelerating the squeeze.

He also suspects job postings don’t capture the full reality. In practice, there’s often a gap between recruiters and hiring managers, and recruiters may reuse older requirement templates. That makes AI in job requirements an imperfect signal. Even when it isn’t listed, the workflow is shifting, and the way designers work is already changing.


© 2026.
