Through the Glass: Designer’s Perspective on WWDC 2025

Explore WWDC 2025 through the lens of top GeekyAnts UI/UX designers as they unpack Apple’s Liquid Glass, visionOS, and the evolving role of AI in design.

Author: Boudhayan Ghosh, Technical Content Writer

Date: Jul 11, 2025

Editor’s Note

This blog is based on an internal design roundtable hosted at GeekyAnts, featuring UI/UX designers Robin Mathew, Vineeth Kiran, and Raksha S. The discussion covered Liquid Glass, interaction patterns in visionOS, performance trade-offs, and gaps in design tooling. It also touched on the evolving role of AI across Apple’s ecosystem and what these changes might mean for designers, developers, and end users. This blog presents a distilled version of that session.

Apple’s WWDC 2025 keynote unfolded like a controlled study in visual experimentation. The presentation introduced a series of deliberate shifts—some surface-level, others structural. At the centre was a visual system called Liquid Glass, a new layer of interaction built on refraction, elasticity, and fluid movement.

Vineeth Kiran, Raksha S, and Robin Mathew, UI/UX designers at GeekyAnts, watched the event closely and recorded their reactions. What emerged was not a rundown of features, but a layered reading of material, interface, and direction—what had changed, what felt deliberate, and what resisted understanding.

Liquid Glass and the Edges of Material Design

The Liquid Glass interface was the dominant visual signature of the keynote. It was animated with tension. UI blocks swelled and contracted as if suspended in fluid. Text refracted as elements moved beneath it. One designer described it as the feeling of water on glass—a visual behaviour more than a decorative filter. Another noted its structure: soft at the edges, rubber-like, almost elastic in its visual rhythm.

None of them saw it as “glassmorphism.” That comparison was dismissed early. The distortion here was physical. It mimicked actual light bending and object displacement. The implementation showed clear effort in microphysics, not aesthetic layering. What concerned them more was not what it looked like, but whether it could function across unpredictable environments.

Mixed backgrounds caused problems. When Liquid Glass passed over content with mid-tone contrast or layered video, text clarity fell apart. The system had rules, but many real-world contexts ignored those rules. The visual field became unstable.

The underlying principle—dynamic adaptation—was understood. Foreground elements adjusted their tone based on what lay behind them. In practice, however, the system had gaps: accessibility suffered in edge cases, and no solution was presented for those.
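
To make that gap concrete, here is a minimal sketch of the adaptation rule as the designers described it. The names, the threshold, and the luminance input are illustrative assumptions, not Apple API:

```swift
import SwiftUI

// Sketch of the adaptation rule: foreground tone is derived from
// whatever sits behind it. `backgroundLuminance` and the 0.5 cutoff
// are our stand-ins for whatever the system computes internally.
struct AdaptiveLabel: View {
    let text: String
    /// Luminance of the content behind the label, 0 (black) to 1 (white).
    let backgroundLuminance: Double

    var body: some View {
        Text(text)
            .foregroundStyle(adaptiveForeground)
    }

    private var adaptiveForeground: Color {
        // Clear-cut cases resolve cleanly: dark backgrounds get light
        // text, light backgrounds get dark text. Mid-tone backgrounds
        // (around 0.4 to 0.6) are the edge case the group flagged:
        // neither choice yields reliable contrast.
        backgroundLuminance < 0.5 ? .white : .black
    }
}
```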

Performance came next. Animating distortion across layers increased GPU load. The fluidity came at a cost. While keynote demos showed precision, these designers questioned how reliably this would translate across the iOS ecosystem.
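
One mitigation teams might reach for is dialling the distortion back when the user or the device signals for it. The fallback rule below is our sketch, not anything Apple has prescribed, though the Reduce Transparency and Low Power Mode checks are existing API:

```swift
import SwiftUI

// Fall back to a flat background when the user has asked the system
// to reduce transparency, or when Low Power Mode suggests the GPU
// cost of animated blur and distortion is unwelcome.
struct GlassCard<Content: View>: View {
    @Environment(\.accessibilityReduceTransparency) private var reduceTransparency
    @ViewBuilder var content: Content

    var body: some View {
        let lowPower = ProcessInfo.processInfo.isLowPowerModeEnabled
        content
            .padding()
            .background(
                (reduceTransparency || lowPower)
                    ? AnyShapeStyle(.background)      // flat and cheap
                    : AnyShapeStyle(.ultraThinMaterial) // blurred, costlier
            )
    }
}
```

Apple may well ship its own adaptive behaviour; until it does, this kind of guard is the conservative default.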

Designing What Cannot Be Designed

There was an immediate technical frustration: this cannot be prototyped in Figma. The team discussed how none of today’s standard design tools supported real-time distortion, interactive light refraction, or layered visual liquidity. There was no way to replicate what Apple had built.

That absence was more than technical. It exposed a directional shift. Design tooling had, until now, evolved in parallel with platform behaviour. With Liquid Glass, the platform had pulled ahead. Designers would be asked to build experiences they could not simulate.

There was speculation that plugins might emerge. Perhaps Figma would adapt. But for now, this was an interface language inaccessible to those tasked with extending it.

Space and Orientation

visionOS expands how interface elements behave in a physical context. Screens are no longer the only place where interactions occur. Components respond to spatial placement, align with surfaces, and adjust based on the environment.

In one demonstration, the weather widget was positioned on a wall. Once placed, it revealed a view showing outdoor conditions. The action responded to the physical surface, not the screen. Placement shaped what the interface did, and the wall became part of the interaction.
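
The behaviour is easier to reason about in code. Below is a rough sketch using RealityKit's plane anchors; anchoring to a wall-classified vertical plane is existing API, while the widget content and any surface-driven behaviour are our stand-ins:

```swift
import RealityKit

// Anchor a widget-like panel to a real wall. Once the anchor
// resolves against a surface, placement itself becomes the input:
// the app can react to where the content landed.
func makeWallAnchoredWidget() -> AnchorEntity {
    // Ask RealityKit for a vertical plane classified as a wall,
    // at least 0.3m x 0.3m.
    let wallAnchor = AnchorEntity(
        .plane(.vertical, classification: .wall, minimumBounds: [0.3, 0.3])
    )

    // Stand-in for the widget's visual content.
    let panel = ModelEntity(
        mesh: .generatePlane(width: 0.3, height: 0.2),
        materials: [SimpleMaterial(color: .white, isMetallic: false)]
    )
    wallAnchor.addChild(panel)
    return wallAnchor
}
```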

FaceTime introduced a different use of orientation. In addition to updated avatar rendering, the system allowed one user to shift into the other person’s point of view. This altered how presence was structured inside the conversation. Interaction was built around mutual perspective, not a fixed display.

Together, these features point to a direction where space is not merely visual context but functional input. Interfaces change depending on where they are, what they face, and how they relate to the environment around them.

Fragmented Intelligence

The integration of ChatGPT into Apple’s ecosystem brought new capabilities, but also surfaced a gap in interface logic. The system can now process screenshots, summarise content, and respond to natural language queries. These actions are handled through ChatGPT, not Siri.

Siri still managed simple tasks—reminders, conversions, alarms—but had not been extended to the new functions. One designer pointed out that asking Siri for a recipe still opened a browser tab. Another noted that screenshot queries now bypassed Siri entirely, going straight to ChatGPT. These moments exposed a deeper issue: there was no unified layer guiding how intelligence was surfaced. The system responded in parts, but no single point of contact represented the whole.

The group returned several times to the absence of a unified system agent. Siri had not absorbed the new capabilities, and ChatGPT operated as a separate layer. This left no clear entry point for interaction. Intelligence was distributed across features, but the OS did not present it as a system. That design decision shaped how the tools were understood—not as extensions of a single assistant, but as isolated responses to specific tasks.
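
As a purely hypothetical illustration of that fragmentation, here is the routing the designers described, written out as code. None of this is Apple API; it only makes the shape of the problem visible:

```swift
import Foundation

// Hypothetical task types, named for the examples raised in the
// discussion.
enum IntelligenceTask {
    case reminder(String)
    case screenshotQuery(imageData: Data, question: String)
    case recipeSearch(String)
}

enum Handler { case siri, chatGPT, browser }

/// Today's behaviour as the designers described it: each task type
/// is routed to a different surface, with no agent owning the whole.
func currentRouting(for task: IntelligenceTask) -> Handler {
    switch task {
    case .reminder:        return .siri     // simple tasks stay with Siri
    case .screenshotQuery: return .chatGPT  // bypasses Siri entirely
    case .recipeSearch:    return .browser  // Siri opens a browser tab
    }
}
```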

Developer Adjustment

Several interface behaviours were redefined. UI alignment, motion thresholds, and visual layering had been updated across the system. These changes introduced new defaults that previous applications were not designed to meet. Apps that had once matched the platform visually would now stand apart if left unchanged.

The group understood this shift as a reset in how native interfaces were structured. The required updates did not involve new technical models, but they did require adaptation. Existing layouts, transitions, and input logic had to be re-evaluated against the updated system standard. The work ahead was procedural, but the implications were broad. What had previously been accepted as consistent would now need to be revised.
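
In practice, that adaptation will likely look like availability-gated adoption. The sketch below assumes the glassEffect modifier Apple previewed and an iOS 26 availability version; both names may shift before release:

```swift
import SwiftUI

// Adopt the updated system default where available, and keep the
// legacy treatment as a fallback for older systems.
struct BadgeView: View {
    var body: some View {
        let label = Text("New")
            .padding(8)

        if #available(iOS 26, *) {
            // New system material, as previewed at WWDC.
            label.glassEffect()
        } else {
            // Pre-existing apps keep their old visual treatment.
            label.background(.thinMaterial, in: Capsule())
        }
    }
}
```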

Making the Incomplete Visible

Toward the end of the discussion, the group returned to Liquid Glass. Several limitations were already visible. The material did not render reliably across background conditions. Design tools could not replicate its effects. Performance expectations remained undefined. The system had been released without full implementation support.

There was agreement that the timing was intentional. By making the system visible before it was fully developed, Apple had opened it to feedback. The focus shifted toward its construction—how the interface handled distortion, how it moved in layered space, how it interacted with depth and light. These elements were being examined in detail, even though teams could not yet use them in production.

The release created a reference point. Designers were studying the system, discussing its constraints, and outlining what would need to change to make it usable. The attention arrived before the platform was ready to support it.

Closing Note

The discussion moved across features, systems, and design assumptions. Some updates were accepted immediately. Others were left open, noted for what they implied rather than what they resolved. The group focused less on what had been added and more on how the operating system now expected users—and designers—to think. The conversation did not end with a single opinion or conclusion. It outlined the kinds of decisions that would follow.
