React Native Spectrum, Issue 2: Building for Performance, Scale, and Continuity
The second issue of React Native Spectrum examines the evolving React and React Native landscape — from real-time device tracking and Expo config plugins to iOS Live Activities, the shift to React Native’s New Architecture, and the release of gluestack v3. It captures how engineering teams are solving real-world performance, scalability, and workflow challenges that shape the next phase of cross-platform development.
From Concept to Scale: Building Real-Time Device Tracking
Modern applications promise seamless experiences, yet the path from initial concept to production-ready software reveals a complex landscape of technical challenges and strategic decisions. The development of a real-time device tracking platform offers valuable insights into how engineering teams navigate performance bottlenecks, architectural decisions, and the delicate balance between feature delivery and system stability.
What began as a straightforward mapping application evolved into a sophisticated platform capable of handling thousands of concurrent devices. The journey illuminates fundamental principles about software development, from the importance of early performance consideration to the strategic use of specialised tools for distinct problems.
The Foundation Challenge
The initial approach seemed logical: retrieve all devices from the API and display each as a unique marker on the map. This direct implementation served as a functional proof of concept, demonstrating that the core feature worked as intended. The development team had successfully created a baseline that proved the technology stack could handle the core mapping functionality.
When tested with realistic data volumes, the application's limitations surfaced immediately. The user interface thread struggled to manage hundreds of individual components simultaneously, resulting in sluggish navigation and frequent crashes, particularly on Android devices where memory constraints are less forgiving. This naive method directly tied the application's performance to the number of devices, creating a linear scaling problem that made the software unusable under production conditions.
The team faced a harsh reality check. What functioned perfectly in development environments with a handful of test devices became completely unresponsive when subjected to real-world data loads. The application would freeze during map interactions, crash when users attempted to zoom out to view large areas, and consume excessive memory that forced other applications to close.
React's Rendering Burden
The mapping interface revealed a fundamental React performance issue. Every minor interaction—panning, zooming, or parent component updates—triggered a complete re-rendering of every device marker, regardless of whether the underlying data had changed. This created unnecessary computational overhead that degraded the user experience significantly.
The problem extended beyond simple inefficiency. Each re-render cycle consumed precious memory and processing power, creating a cascading effect that slowed the entire application. Users experienced choppy animations, delayed responses to touch inputs, and frequent application freezes during basic navigation tasks.
The solution lay in React's memoisation capabilities. By implementing React.memo around marker components, the team prevented unnecessary re-renders when component properties remained unchanged. Supporting this approach, useCallback and useMemo ensured that function and object references maintained stability across render cycles.
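The mechanics behind this fix are worth spelling out. React.memo skips a re-render when a shallow comparison finds every prop unchanged, and inline handlers defeat it because each render creates a fresh function object. The sketch below is a conceptual model of that comparison, not React's actual implementation:

```typescript
type Props = Record<string, unknown>;

// Conceptual model of the shallow prop comparison React.memo performs.
function shallowEqual(prev: Props, next: Props): boolean {
  const prevKeys = Object.keys(prev);
  const nextKeys = Object.keys(next);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every((key) => Object.is(prev[key], next[key]));
}

// An inline handler is a new function object on every render,
// so the comparison fails and the marker re-renders needlessly:
const render1 = { id: 'device-1', onPress: () => {} };
const render2 = { id: 'device-1', onPress: () => {} };
// shallowEqual(render1, render2) → false

// A memoised handler (what useCallback provides) keeps the reference stable:
const stableOnPress = () => {};
const render3 = { id: 'device-1', onPress: stableOnPress };
const render4 = { id: 'device-1', onPress: stableOnPress };
// shallowEqual(render3, render4) → true
```

This is why useCallback and useMemo matter: they preserve reference identity between renders so the shallow check can actually succeed.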
Architectural Transformation Through Clustering
Memoisation addressed the re-rendering problem yet could not overcome the fundamental scalability limitation. Thousands of marker components in the UI tree continued to consume excessive memory, particularly when users viewed large geographical areas with high device density. The application still struggled with the sheer volume of elements, regardless of how efficiently they rendered.
The breakthrough came through clustering technology. Rather than rendering individual markers, the system now groups nearby devices into consolidated cluster icons. This approach dramatically reduces the number of visual elements the map must manage at any given time. As users zoom into specific areas, clusters intelligently expand to reveal smaller groups or individual devices.
Implementation required sophisticated algorithms to determine optimal grouping strategies. The system needed to balance visual clarity with performance gains, ensuring that clusters provided meaningful information while maintaining responsive interactions. Different zoom levels triggered different clustering behaviours, creating a dynamic visualisation that adapted to user needs.
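As a rough illustration of the grouping idea, the sketch below uses a simple grid: devices that fall into the same grid cell merge into one cluster, and shrinking the cell size as the user zooms in makes clusters split apart. This is a minimal, assumed implementation for clarity; production systems typically rely on a dedicated library such as supercluster.

```typescript
interface Device { id: string; lat: number; lng: number; }
interface Cluster { lat: number; lng: number; devices: Device[]; }

// Minimal grid-based clustering: devices in the same cell become one cluster.
// cellSize would shrink as the map zoom level increases.
function clusterDevices(devices: Device[], cellSize: number): Cluster[] {
  const cells = new Map<string, Device[]>();
  for (const d of devices) {
    const key = `${Math.floor(d.lat / cellSize)}:${Math.floor(d.lng / cellSize)}`;
    const bucket = cells.get(key) ?? [];
    bucket.push(d);
    cells.set(key, bucket);
  }
  // Represent each cluster at the centroid of its members.
  return [...cells.values()].map((members) => ({
    lat: members.reduce((s, d) => s + d.lat, 0) / members.length,
    lng: members.reduce((s, d) => s + d.lng, 0) / members.length,
    devices: members,
  }));
}
```

The map then renders one marker per cluster instead of one per device, which is what decouples rendering cost from fleet size.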
This architectural shift represented the difference between symptom treatment and root cause resolution. While memoisation improved efficiency, clustering fundamentally altered the performance equation, enabling the platform to handle orders of magnitude more devices without degradation.
Layered Complexity and Context
Device locations provide valuable information, yet business context transforms data into actionable intelligence. The platform needed to support additional layers: Points of Interest representing customer sites and Geofences defining service areas. These features would enable users to understand not just where their assets are located, but how those locations relate to their business operations.
Implementing these layers required careful state management to ensure users could toggle different information types without performance penalties. Each layer operated independently, allowing selective visualisation based on user preferences.
The technical complexity extended beyond React components into native map rendering. Custom polygons and specialised icons introduced platform-specific challenges, particularly within Android's rendering engine. Significant debugging efforts ensured that all layers could coexist and interact smoothly across different devices and operating systems.
Real-Time Data Flow
Static snapshots have limited value for operational applications. True live tracking requires continuous data updates that transform the map from a reporting tool into a dynamic monitoring platform. The core promise of the application is centred on providing users with current device locations, making automated data refresh essential rather than optional.
The initial polling system used a straightforward setInterval approach, fetching updated device locations every 30 seconds from the backend servers. This new data was then fed into the central state management store, which automatically triggered updates to map markers as fresh information arrived. The implementation created a background service that operated independently of user interactions, ensuring that device positions remained current without requiring manual refresh actions from users.
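A sketch of that polling loop is shown below. The fetcher and store callback are injected to keep the service testable; the class and method names are illustrative rather than taken from the actual codebase. Wiring start() and stop() to navigation focus events yields the focus-coupled behaviour described in the next subsection.

```typescript
interface Device { id: string; lat: number; lng: number; }

// Illustrative 30-second polling service for device locations.
class DevicePoller {
  private timer: ReturnType<typeof setInterval> | null = null;

  constructor(
    private fetchDevices: () => Promise<Device[]>,   // backend call
    private onDevices: (devices: Device[]) => void,  // push into the store
    private intervalMs = 30_000,
  ) {}

  start(): void {
    if (this.timer) return;            // already polling
    void this.refresh();               // fetch immediately on start
    this.timer = setInterval(() => void this.refresh(), this.intervalMs);
  }

  stop(): void {
    if (this.timer) clearInterval(this.timer);
    this.timer = null;
  }

  async refresh(): Promise<void> {
    const devices = await this.fetchDevices();
    this.onDevices(devices);           // markers update via the state store
  }
}
```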
Intelligent Resource Management
Constant polling, while functional, created unnecessary resource consumption. The system drained device batteries and consumed mobile data regardless of whether users were actively viewing the map. Additionally, users often encountered stale data when first navigating to the mapping interface, creating a suboptimal initial experience.
The solution involved coupling polling behaviour with application navigation state. Data fetching now activates only when users access the map screen and pauses immediately when they navigate elsewhere. Furthermore, an immediate data fetch triggers whenever the map comes into focus, ensuring users always see current information.
This intelligent polling approach significantly improved resource efficiency while enhancing user experience. The implementation required careful handling of state transitions and UI updates to prevent the bugs that often accompany such optimisation efforts.
Strategic Insights for Technical Leadership
The development process revealed several principles applicable beyond mapping applications. Performance considerations must influence architectural decisions from project inception rather than being addressed as afterthoughts. When applications fail under realistic data loads, the root cause typically lies in fundamental design choices rather than implementation details.
The distinction between quick fixes and architectural solutions proves crucial for long-term success. While memoisation provided immediate relief, clustering addressed the underlying scalability challenge. Technical leaders must develop the ability to identify when symptoms indicate deeper structural issues requiring more comprehensive solutions.
Bridging Native and Managed: Config Plugins in Expo Development
Mobile app development often presents a fundamental tension between convenience and control. Expo's managed workflow offers remarkable developer productivity, abstracting away native complexities while maintaining cross-platform compatibility. Yet powerful third-party libraries frequently demand direct access to native configuration files, creating an apparent impasse for teams committed to managed workflows.
Config plugins emerge as the solution to this dilemma, providing automated native file modification without sacrificing the benefits of managed development. These specialised functions transform how developers integrate complex libraries while preserving reproducible build processes.
The Architecture of Automation
Config plugins function as synchronous JavaScript functions referenced in the plugins array of an Expo application configuration file. Each plugin receives an Expo config object and returns an enhanced version after applying platform-specific customisations. These modifications might include new permissions, API keys, or changes to underlying native files like AndroidManifest.xml or Info.plist.
The plugin system operates through a sophisticated abstraction layer. Rather than directly manipulating native files, plugins utilise mods—specialised modifier functions that safely interact with native project files as structured data. Common mod functions, like withAndroidManifest and withInfoPlist, provide secure APIs for modifying native configurations without risking file corruption or syntax errors.
This architecture ensures that all native modifications occur during the prebuild phase, before code compilation begins. The timing proves crucial, as it allows the system to validate changes and maintain file integrity throughout the build process.
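Conceptually, a plugin is just a function over the config object. The simplified sketch below models that shape in plain TypeScript; a real plugin would import mod helpers such as withInfoPlist from @expo/config-plugins rather than mutating the object directly, and the plugin name here is invented for illustration (the plist key is the standard iOS one).

```typescript
// Simplified model of a config plugin: take the config, return an
// enhanced copy. Real plugins use mod helpers from @expo/config-plugins.
interface ExpoConfigLike {
  name: string;
  ios?: { infoPlist?: Record<string, unknown> };
}

function withLocationPermission(config: ExpoConfigLike, message: string): ExpoConfigLike {
  return {
    ...config,
    ios: {
      ...config.ios,
      infoPlist: {
        ...config.ios?.infoPlist,
        // Standard iOS key; the user-facing message is supplied by the app.
        NSLocationWhenInUseUsageDescription: message,
      },
    },
  };
}
```

In a real project, the plugin would be referenced from the plugins array of the app config so that prebuild applies it on every native regeneration.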
Solving Real-World Integration Challenges
The practical value of config plugins becomes evident when integrating libraries that demand native configuration. Consider the challenge of incorporating Optimove for notifications, Pusher for real-time communication, and Veriff for identity verification into an Expo application. Each library requires specific native file modifications that would traditionally force developers to eject from the managed workflow.
Pusher integration demands permissions and service hooks across AndroidManifest.xml, Info.plist, and various native files. Veriff requires custom permissions and SDK linking for both iOS and Android platforms. Optimove needs permission configurations alongside service files and extensions. Without config plugins, developers face manual configuration across multiple platforms, introducing opportunities for human error and inconsistency.
Custom config plugins automate these integrations entirely. Each plugin encapsulates the specific native changes required for its corresponding library, applying modifications consistently across development, staging, and production builds. The approach maintains full compatibility with Expo's automated build systems while enabling access to powerful native functionality.
Implementation and Developer Experience
Writing effective config plugins requires understanding both the target library's native requirements and Expo's modification APIs. Plugins typically combine multiple mod function calls, each addressing specific native file requirements. The development process involves identifying required native changes, translating them into appropriate mod function calls, and ensuring proper error handling for edge cases.
The plugin system provides both safety and flexibility through its layered approach. Standard mod functions handle common scenarios safely, while dangerous mods allow direct file system operations for complex requirements. This graduated access model enables developers to choose appropriate abstraction levels based on their specific needs.
Strategic Advantages for Development Teams
Config plugins deliver significant organisational benefits beyond technical functionality. Native configurations become version-controlled assets, ensuring reproducibility across team environments and deployment pipelines. Automation eliminates manual setup procedures, reducing onboarding time for new team members and minimising configuration drift between environments.
The plugin approach also enables code reusability across projects. Well-designed plugins can be packaged and shared, creating internal libraries of native integrations that accelerate future development cycles. Teams can build institutional knowledge around complex native integrations while maintaining the productivity benefits of managed workflows.
Future-Proofing Development Workflows
Config plugins represent a fundamental shift in how mobile app development teams approach native integration. By centralising native modifications within automated, version-controlled processes, they eliminate many traditional pain points associated with cross-platform development. The approach proves particularly valuable for teams practising Continuous Native Generation, where native directories can be regenerated without losing manual changes.
The plugin system enables teams to access the full spectrum of native capabilities while preserving the developer experience advantages that drew them to managed workflows initially. This balance positions organisations to adopt powerful new libraries and platform features without architectural compromises or workflow disruptions.
Beyond Push Notifications: iOS Live Activities in React Native
Traditional push notifications suffer from a fundamental limitation: they disappear once acknowledged, leaving users to repeatedly check applications for status updates. iOS Live Activities address this gap by providing persistent, real-time information directly on the Lock Screen and Dynamic Island, transforming how users interact with time-sensitive data.
For React Native developers, implementing Live Activities presents both significant opportunities and technical challenges. The technology promises 40% higher engagement rates than conventional notifications while enabling new revenue streams through enhanced user interaction. However, success requires bridging native iOS capabilities with cross-platform development workflows.
Real-Time Presence Architecture
Live Activities operate through a sophisticated architecture that maintains persistent visibility of critical information. The Dynamic Island on iPhone 14 Pro models provides three distinct presentation modes: minimal displays for simple indicators, compact layouts featuring leading and trailing elements, and expanded views with full interactive capabilities. This graduated presentation system ensures information remains accessible without overwhelming the interface.
The underlying technology relies on ActivityKit and WidgetKit frameworks, which manage state updates and UI rendering independently of the main application. This separation enables Live Activities to remain current even when the host application is terminated, creating truly persistent monitoring experiences.
Integration Complexity and Solutions
React Native implementation requires careful coordination between JavaScript logic and native iOS components. The process begins with creating a Widget Extension target within Xcode, followed by configuring ActivityAttributes structures that define both static data and dynamic state properties. Flight tracking applications, for example, maintain fixed information like airline and route while updating status, gate assignments, and timing data.
struct FlightActivityAttributes: ActivityAttributes {
    public struct ContentState: Codable, Hashable {
        var status: String     // "Boarding", "Delayed", "On Time"
        var gate: String       // "A12", "B7", "TBD"
        var countdown: Date    // Departure time
    }

    var flightNumber: String   // "AI 101"
    var route: String          // "DEL → BOM"
    var airline: String        // "Air India"
}
The native module bridge becomes crucial for enabling JavaScript control over Live Activities. Swift implementations handle Activity lifecycle management, token monitoring for push notification integration, and state updates triggered by React Native components. This architecture requires developers to write substantial native code while maintaining cross-platform compatibility.
Critical configuration steps include enabling Live Activities in Info.plist, configuring push notification capabilities, and establishing proper build phase settings. The NSSupportsLiveActivitiesFrequentUpdates key proves essential for applications requiring rapid updates, such as transportation or financial monitoring systems.
Push Notification Integration Strategy
Live Activities achieve their persistent nature through Apple Push Notification Service integration. Each activity receives a unique push token that enables backend systems to deliver updates directly to the Lock Screen interface. This approach eliminates the need for background processing while ensuring updates arrive even when applications are completely terminated.
Backend implementation requires specialised APNs payload structures that target Live Activity tokens rather than device tokens:
{
  "aps": {
    "timestamp": 1672531200,
    "event": "update",
    "content-state": {
      "status": "Boarding",
      "gate": "A15",
      "countdown": 1672534800
    },
    "alert": {
      "title": "Flight AI 101",
      "body": "Now boarding at Gate A15"
    }
  }
}
The system supports both update operations for existing activities and Push-to-Start functionality introduced in iOS 17.2, which enables server-initiated activity creation. This capability proves particularly valuable for flight tracking applications that can automatically begin monitoring based on booking confirmations or schedule changes.
The push notification architecture also enables interactive elements within Live Activities. Users can trigger deep links to specific application screens or execute background actions without fully launching the application. Implementation requires AppIntents integration that handles user interactions and coordinates with React Native navigation systems.
Business Impact and User Engagement
Live Activities create measurable business advantages beyond improved user experience. Flight tracking applications report a 60% reduction in abandonment during delays and 50% higher session duration when Live Activities remain active. The persistent visibility encourages continued engagement during critical travel moments when users previously might have switched to competitor applications.
React Native integration requires a service layer to manage Live Activities from JavaScript:
class FlightActivityService {
  private currentActivityId: string | null = null;
  private tokenListener: EmitterSubscription | null = null;

  async startFlightTracking(flightData: FlightActivityData): Promise<void> {
    if (Platform.OS !== 'ios') return;

    // Register the activity's push token with the backend as soon as
    // the native side reports it. (The native call that actually starts
    // the activity is omitted from this excerpt.)
    this.tokenListener = activityEmitter.addListener('onTokenUpdate', (data) => {
      this.currentActivityId = data.activityId;
      this.registerTokenWithBackend(data.token, data.activityId);
    });
  }

  async updateFlightStatus(status: string, gate: string): Promise<void> {
    if (!this.currentActivityId) return;
    await FlightActivityNative.updateFlight(this.currentActivityId, status, gate);
  }
}
Revenue generation opportunities emerge through the strategic placement of interactive elements within Live Activities. Transportation applications can promote ancillary services like seat upgrades or ground transportation directly within the Lock Screen interface. Early implementations show 30% increases in ancillary bookings and 45% higher click-through rates on cross-sell offers.
The technology also reduces operational costs through decreased customer support interactions. When users receive real-time updates about gate changes or delays through Live Activities, support query volume drops by approximately 20%. This reduction represents significant cost savings for customer-facing applications in transportation, delivery, and financial sectors.
Implementation Strategy for Development Teams
Successful Live Activities implementation requires balancing technical complexity with user value delivery. Development teams should prioritise essential information display over feature completeness, focusing on data that users genuinely need during critical moments. The Lock Screen real estate is limited and valuable, making information hierarchy decisions crucial for user adoption.
The native module bridge implementation demonstrates the coordination required between platforms:
@objc(LiveActivityModule)
class LiveActivityModule: RCTEventEmitter {
    private var currentActivity: Activity<FlightActivityAttributes>?

    private func startFlightActivity() {
        let attributes = FlightActivityAttributes(
            flightNumber: "AI 101",
            route: "DEL → BOM",
            airline: "Air India"
        )
        // Example initial state for the activity's dynamic content
        let initialState = FlightActivityAttributes.ContentState(
            status: "Scheduled",
            gate: "TBD",
            countdown: Date()
        )
        do {
            currentActivity = try Activity.request(
                attributes: attributes,
                contentState: initialState,
                pushType: .token
            )
        } catch {
            // Report the failure rather than crashing the app
            print("Failed to start Live Activity: \(error)")
        }
    }
}
Technical reliability becomes paramount due to the persistent nature of Live Activities. Users develop strong expectations around accuracy and timeliness when information remains visible on their Lock Screen. Robust error handling, graceful network failure recovery, and intelligent update throttling prevent user frustration and maintain application credibility.
The investment in Live Activities technology positions applications at the forefront of mobile user experience evolution. As iOS continues expanding platform capabilities, early implementations establish competitive advantages while creating new opportunities for user engagement and revenue generation in increasingly crowded application marketplaces.
The Critical Connection: From Local yalc to Safe Previews
The growth of gluestack-ui brought with it a challenge common to all open component libraries. Contributions are vital, but every change carries the risk of destabilising projects that depend on it. A system was needed that could allow contributors to participate freely while ensuring that maintainers had the tools to evaluate changes safely and rigorously.
The solution took the form of a carefully designed development and preview workflow. It redefined how contributions are tested, reviewed, and merged, replacing lightweight demonstrations with a process that reflects the realities of production environments.
Moving Beyond Storybook
Storybook had served as the early tool for testing. It offered isolated examples, quick to set up and easy to share, but it lacked depth. Components worked in isolation yet failed when placed inside real projects. Integration bugs, performance bottlenecks, and platform-specific behaviours slipped through.
The team decided that validation needed to happen in real applications. Next.js and Expo became the new testing grounds. By working directly within these environments, contributors could expose the subtle issues that Storybook could not reveal. Reviewers, in turn, gained confidence that the contributions under review would behave as expected in the field.
Local Development with yalc
With the stage set for real-application testing, the next step was to equip contributors with a reliable workflow. The traditional npm link approach proved fragile, especially in React Native projects where dependency resolution can be sensitive. The team adopted yalc, a tool that simulates a local npm registry.
When contributors published with yalc, their packages were copied into a local store. Installing them into Next.js and Expo test apps replicated the experience of using a package from npm, but without the overhead of public publishing. Dependencies resolved as they would in production, and contributors avoided the pitfalls of symlinks and broken configurations.
The workflow was simple. Contributors built their package with a single command, linked it into test applications, and watched changes propagate through hot reload. This eliminated the delays of repeated builds and provided immediate feedback across both web and mobile environments. It turned local development into a smooth rehearsal for production.
From Local Testing to Preview Builds
Local testing gave contributors confidence, but maintainers and reviewers required more. They needed to see the actual build artefacts that would eventually be published. Local packages do not exist on npm, and public packages lag behind contributor changes. Without a bridge, reviewers would never be able to test the real outputs of a pull request.
The bridge was an automated pipeline built with GitHub Actions. Every pull request triggered a workflow that identified changed packages, built them, and published them to npm under unique preview versions. These versions included a pull request number, a commit hash, and a timestamp. They were traceable, temporary, and separate from official releases.
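An illustrative helper for producing that kind of traceable version string is shown below. The exact format gluestack-ui's workflow emits may differ; the function name and layout here are assumptions for the sake of the sketch.

```typescript
// Build a preview version string from the base version, pull request
// number, short commit hash, and a UTC timestamp (YYYYMMDDHHmm).
function previewVersion(base: string, prNumber: number, commitSha: string, when: Date): string {
  const shortSha = commitSha.slice(0, 7);
  const stamp = when.toISOString().replace(/[-:TZ.]/g, '').slice(0, 12);
  return `${base}-pr${prNumber}.${shortSha}.${stamp}`;
}

// previewVersion('1.2.3', 481, 'abcdef1234567', new Date('2024-01-02T03:04:05Z'))
// → '1.2.3-pr481.abcdef1.202401020304'
```

Because the pull request number and commit hash are embedded in the version itself, any preview package on npm can be traced back to the exact change that produced it.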
Once published, the workflow verified availability on npm and then triggered deployments. Next.js builds were generated on Vercel, while Expo produced preview builds for React Native. Reviewers had immediate access to live applications running the new packages, as well as installable versions they could test locally.
Safe Deployment for Review
The preview workflow created a safe layer between contributions and production. Packages released under preview tags could never overwrite stable versions, yet they were fully functional. This meant reviewers interacted with the same build outputs that would eventually reach users.
Vercel and Expo provided the environment to test those artefacts. Reviewers no longer needed to replicate contributor setups or rely on screenshots. They opened live builds, explored interactions, and validated performance in real time. The experience mirrored what developers in production would encounter, raising confidence in every decision to approve or request changes.
Verification Before Production
The system did not stop at previews. Once contributions passed review, and only then, they were merged into production. This safeguard ensured that unverified changes never touched the main branch or the packages consumed by end users.
The two-stage process of local testing and preview publishing addressed a long-standing limitation in open-source workflows. Contributors lacked permission to publish official packages, and for good reason. Yet reviewers could not perform their role effectively without testing real builds. The preview system provided the missing piece. It turned every pull request into a set of reviewable, traceable packages that disappeared once the work was complete.
Why yalc Worked
The choice of yalc proved essential to the contribution workflow. It allowed contributors to publish packages into a local store that acted like a private registry, ensuring their changes behaved as though they had been pulled from npm. This eliminated the dependency resolution conflicts and broken symlink issues that commonly appear when using npm link, particularly in React Native projects.
Equally important was the consistency it brought across platforms. Contributors could test their updates in both Expo and Next.js environments without extra configuration, and every change reflected the behaviour expected in production. This stability reinforced the reliability of the overall system and freed contributors to focus on improving components rather than troubleshooting development tools.
Building a Culture of Safety
The process reflects more than technical adjustments. It represents a culture of safety in open-source collaboration. Contributors can experiment and iterate without fear of breaking downstream projects. Reviewers have tools that let them evaluate changes in conditions that reflect reality. Maintainers can merge with the assurance that the integrity of the library is protected.
The development and preview system achieves balance. It combines the openness of community-driven work with the rigour expected of a production-ready project. Packages are tested locally, reviewed as published previews, and only then merged. Each stage reinforces the others, creating a chain of trust that extends from the contributor’s first commit to the library installed in a production application.
Conclusion
The system built for gluestack-ui shows how contribution workflows can evolve to meet the demands of modern development. It acknowledges the limitations of traditional testing and replaces them with a process that is both practical and reliable.
For contributors, it provides clarity and speed. For reviewers, it delivers real, testable builds. For maintainers, it creates a safety net that ensures production stability. Together, these elements protect the integrity of the library while encouraging the community to contribute.
In the broader React Native ecosystem, this model has relevance for any project that balances openness with reliability. By integrating local testing, automated previews, and strict verification, gluestack-ui has demonstrated how a project can invite collaboration without compromising trust.
Measuring Speed: React Native’s Shift to a New Architecture
Performance is the measure that decides whether an application earns a place on a user’s device or is abandoned after the first encounter. A single second of delay can reduce engagement, cut conversions, and push people away before they finish a task. Even a hundred milliseconds has measurable effects on sales for large e-commerce platforms.
For developers working with React Native, these numbers underline the urgency of treating performance as a core design principle. It is not enough for an application to work; it must feel responsive, fluid, and dependable across devices. The New Architecture of React Native introduces tools and techniques that make this goal achievable at scale.
Why Performance Matters
The evidence is difficult to ignore. Thirty-eight percent of users abandon applications that feel sluggish. Thirty-four percent stop midway through tasks that take longer than expected. A one-second delay can result in eleven percent fewer page views and a seven percent drop in conversions. These statistics make clear that performance is not an additional feature layered onto an app. It is a foundation that shapes whether users continue to engage at all.
To address performance, developers need to understand the benchmarks that measure success. Time to interactivity, time to specific screens, and time to the home screen are all tracked to ensure responsiveness. Smoothness of response is measured by frames per second, with 60 FPS considered the baseline. As devices grow more powerful, users increasingly expect apps to respond at 90 FPS or even 120 FPS.
Optimizing Code
The journey to better performance begins at the code level. React Native provides a profiler that highlights component renders and identifies unnecessary updates. By recording interactions and examining which parts of the UI re-render, developers can identify inefficiencies that degrade responsiveness.
A simple example demonstrates the point. Consider a component with two text elements and a button. In the initial implementation, pressing the button re-rendered the entire screen. Profiling revealed that both text components were refreshed even though only one was affected. The solution was to restructure the logic, splitting the texts into separate components and applying React.memo to the button. With this change, only the relevant component is re-rendered during interaction, reducing wasted computation and improving responsiveness.
Understanding the New Architecture
While code optimisation is essential, React Native’s New Architecture provides deeper improvements. It introduces three core capabilities that reshape performance: synchronous layout and effects, a new JavaScript Interface, and concurrent rendering.
The first of these, synchronous layout and effects, resolves a longstanding limitation. In older architectures, view positions could not be updated in the same commit as their measurements, creating inconsistencies and visual jumps. The New Architecture ensures that layout updates and measurements occur together, removing intermediate steps and eliminating visual misalignment.
The second improvement is the replacement of the traditional bridge with JSI, or JavaScript Interface. This new mechanism allows JavaScript and native code to communicate directly, supported by TurboModules. The result is faster data transfer, reduced latency, and a smoother connection between JavaScript logic and native operations.
The third capability is concurrent rendering. This mechanism focuses on the perception of speed. By interrupting long rendering processes and batching updates, concurrent rendering ensures that the interface remains responsive to user input even while complex operations continue in the background.
Transitions and Deferred Values
Concurrent rendering introduces new tools for developers. Transitions, managed through the useTransition hook, allow applications to acknowledge delays gracefully. When a state update takes time, transitions keep the app responsive by marking the update as pending while still allowing other interactions to proceed.
Another tool is useDeferredValue, which focuses on specific data rather than entire screens. By deferring updates to a particular value, the application avoids blocking larger updates and continues to respond smoothly. Together, transitions and deferred values create interfaces that maintain responsiveness under heavy workloads, preserving the illusion of immediacy for users.
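The scheduling idea behind useTransition can be sketched as two update lanes: urgent updates run immediately, while transition updates are queued and flushed once the urgent work is done. This is a conceptual miniature, not React's scheduler, and every name in it is hypothetical:

```typescript
// Conceptual sketch of the idea behind transitions: urgent updates apply
// immediately, while low-priority updates are queued and flushed later,
// so the interface never blocks on slow state changes.
type Update = () => void;

class TwoLaneScheduler {
  private deferred: Update[] = [];

  // Urgent lane: run right away (e.g., reflecting a keystroke).
  urgent(update: Update): void {
    update();
  }

  // Transition lane: queue for later (e.g., filtering a large list).
  startTransition(update: Update): void {
    this.deferred.push(update);
  }

  // Mirrors the "isPending" flag exposed by useTransition.
  get isPending(): boolean {
    return this.deferred.length > 0;
  }

  // Flush once the urgent work is done.
  flush(): void {
    for (const update of this.deferred) update();
    this.deferred = [];
  }
}

const scheduler = new TwoLaneScheduler();
let input = "";
let results: string[] = [];

scheduler.urgent(() => { input = "rea"; });          // text field updates now
scheduler.startTransition(() => { results = ["react", "reanimated"]; });
console.log(scheduler.isPending); // true: transition still pending
scheduler.flush();
console.log(results.length);      // 2: deferred work has landed
```

The pending flag is what lets an interface show a spinner or dimmed state honestly while the expensive update completes in its own lane.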
React also provides Suspense as a fallback system for loading states. When components are delayed, Suspense introduces placeholders or loaders that keep the user informed and engaged. These mechanisms reflect the shift toward architectures that prioritise perceived responsiveness as much as raw speed.
Benchmarking Improvements
The New Architecture has shown measurable benefits. Tests on devices such as the Google Pixel 4 and iPhone 12 Pro reveal reduced rendering times and smoother performance. The improvements are especially notable on iOS, where the newer hardware amplifies the advantages of synchronous layout, JSI, and concurrent rendering.
These benchmarks demonstrate the tangible impact of upgrading to the New Architecture. For teams still operating on older versions, the data makes the case for transition clear.
The React Compiler
React Compiler represents the next stage in performance optimisation. Still experimental, it automates many of the tasks that developers previously handled manually. Profiling shows that components can be memoised automatically, without the need for explicit React.memo calls. By analysing component dependencies and caching values intelligently, React Compiler reduces unnecessary renders and improves efficiency across the application.
The compiler does not replace the developer's responsibility. To benefit fully, teams must still follow the rules of React, ensuring that code is predictable and free of side effects. When applied correctly, the compiler allows developers to focus on business logic while the system enforces optimisation in the background.
Supporting Tools and Practices
Beyond the New Architecture and React Compiler, additional tools enhance performance. Libraries such as FlashList and LargeList offer efficient rendering for large data sets. For animations, Reanimated provides a high-performance option that keeps interactions fluid. For state management, lightweight atomic solutions like Jotai, Zustand, and Recoil reduce overhead and improve responsiveness compared to heavier global state systems.
Together, these tools create an ecosystem where performance is treated as a continuous discipline rather than a one-time adjustment.
Conclusion
The evolution of React Native reflects a growing emphasis on performance as the core of user experience. Profiling and memoisation provide a foundation, the New Architecture strengthens the connection between JavaScript and native code, and concurrent rendering keeps applications responsive under pressure. React Compiler and supporting libraries extend these benefits, reducing manual effort while expanding what developers can achieve.
For teams building modern mobile applications, performance must be planned and measured at every stage. The New Architecture offers the tools to make that possible, ensuring that users encounter applications that are fast, smooth, and reliable.
The Milestone That Signals a Shift
GeekyAnts released gluestack v3 on GitHub on August 4 and formally announced it on September 3, 2025, marking a new chapter in the effort to unify React, Next.js, and React Native components. The announcement coincided with crossing 100,000 downloads, a figure that represents more than numerical growth. It signals that developers are responding to a new way of thinking about how UI frameworks should work.
In a field dominated by Material UI, Chakra UI, and Ant Design, the milestone matters. Those frameworks have deep roots and wide adoption, yet their model of dependency-heavy libraries has left gaps for teams who want control and adaptability. The milestone achieved by gluestack suggests the search for alternatives is not marginal, but mainstream.
Designing with Ownership in Mind
The defining innovation of v3 is its source-to-destination architecture. Instead of scattering logic across layers, it keeps components in a single source of truth and syncs them across templates and examples. This decision is both technical and philosophical: it encourages developers to treat components as owned assets rather than abstract packages.
That philosophy extends to the copy-paste model. Developers take only what they need, drop it into the project, and immediately shape it to their requirements. The result is faster iteration and fewer compromises, as teams avoid waiting on maintainers or building workarounds to meet design goals.
The modular design reinforces this ownership. Instead of pulling in dozens of unused elements, developers select precisely the components required. Combined with Tailwind and NativeWind integration, the approach allows customisation without sacrificing performance or loading unnecessary code.
Breaking from the Library Mould
Most traditional UI libraries bundle entire design systems. They impose dependencies and conventions that lock teams into opinionated patterns, often bloating projects with features that are never used in production. For developers seeking both performance and flexibility, these libraries offer limited room to manoeuvre.
With gluestack v3, that mould is broken. Its copy-paste model gives full control over code, while universal compatibility ensures the same patterns work across React, Next.js, and React Native. Accessibility, modularity, and performance optimisations combine to position it as a lighter, more adaptable alternative.
A Framework Shaped in Public
The milestone of 100,000 downloads is meaningful, but what strengthens its significance is the way gluestack has grown in public view. Developers are not just downloading components but shaping them, refining documentation, and offering feedback in real time.
GitHub contributions provide a clearer picture of adoption. Active pull requests, discussion threads, and refinements show a community willing to invest energy back into the framework. That depth of involvement is a stronger validation than numbers alone, pointing to a foundation of shared ownership.
The pattern suggests more than popularity. It indicates a framework evolving in response to genuine needs, with improvements flowing directly from usage in real projects. That feedback loop is what transforms an emerging library into a trusted part of the development ecosystem.
From Transition to Trajectory
Upgrading from v2 to v3 has been designed with care. APIs remain intact, theming systems are consistent, and breaking changes are minimised. This planning acknowledges the risk of fragmentation and keeps teams confident that adoption will not create disruption.
Alongside this stability, GeekyAnts has outlined a roadmap that balances expansion with focus. Additions like date-time pickers and bottom sheets are planned, as well as performance optimisations through bundler plugins and tree flattening. The path forward emphasises systematic growth over feature overload.
What This Means for UI’s Future
The milestone marks more than a successful release. It suggests that developers are prioritising flexibility, ownership, and modularity over the convenience of all-in-one systems. The appetite for frameworks that hand control back to teams is stronger than many assumed.
“The release of gluestack v3 is about giving developers the freedom to own their components while ensuring consistency across platforms,” said Sanket Sahu, Co-Founder and CEO of GeekyAnts. “The 100,000-download milestone shows that this philosophy resonates with the community.”
Whether this philosophy becomes the standard or remains a specialised approach will depend on how it scales across production use cases. But the milestone makes one fact clear: what began as an alternative experiment is now a visible force. The release of gluestack v3 shows that the rules of component libraries are open to change, and the community is ready to explore new ground.
React, After the Announcements
In the space of two months, the React community gathered twice to decide what comes next. The sessions moved beyond announcements and turned toward maintenance, scale, and durability. React now carries the responsibility of a platform that underpins global products, and the tone of both events reflected that maturity. Speakers described frameworks as living systems that require governance, not just innovation, and tools that should serve the long run of software rather than the moment of release.
Each update presented during these conferences formed part of a larger structure. The Compiler reached stability, React Native continued its evolution toward the New Architecture, and the React Foundation began its work of formal stewardship. Together, they created a technical and organisational framework meant to last. The community did not meet to celebrate progress but to define continuity.
Architecture and Direction
The React Compiler became the focus of technical discussion. Years of experimentation have produced a tool able to analyse component behaviour at compile time. It applies automatic memoisation that reduces the manual work developers once did to prevent unnecessary re-renders. Performance now follows from structure rather than intervention, and the result is consistency across large codebases.
React Native continued along its planned path. The framework now operates entirely within the New Architecture, supported by Hermes as the default engine. This configuration standardises the build environment and removes ambiguity from deployment. Teams gain a single, predictable target for optimisation and testing.
Several smaller additions reinforced this direction. Activity and Effect Events refine how components respond to lifecycle and user input, while Partial Pre-Rendering improves perceived speed by delivering key interface sections early. Each change strengthens responsiveness without redesigning the underlying system.
Taken together, these updates reflect a framework that is settling into structure. React and React Native share more common ground than ever, and the work now centres on maintaining reliability rather than seeking novelty.
Engineering Reality
For web teams, the compiler introduces a methodical path toward higher performance. It can be enabled in selected areas, tested through lint rules, and tracked with performance metrics. The process is incremental and verifiable. Each stage provides confirmation that behaviour remains consistent while efficiency improves across builds and render cycles. The gain comes from structure rather than intervention, allowing performance work to become part of everyday development.
In React Native, the next phase begins with version 0.82. Teams are auditing native modules, checking Hermes compatibility, and benchmarking existing builds. The New Architecture standardises the environment and brings coherence to previously fragmented projects. Updated tools for profiling and monitoring expose how rendering and interaction costs move through the system. The shift rewards care and attention to detail, turning migration into an exercise in refinement rather than repair.
Rendering remains a shared focus. Libraries such as FlashList and LegendList define how data-heavy screens behave on modern devices. Each balances memory use, scroll behaviour, and layout precision in distinct ways. Evaluating them is part of the ongoing work of measuring structure, verifying design choices, and treating performance as an architectural quality rather than an afterthought.
Governance and Stability
Among the quieter announcements was the establishment of the React Foundation. Formed within the Linux Foundation, it brings together Amazon, Meta, Microsoft, Expo, Callstack, Software Mansion, and Vercel. The foundation creates a shared framework for stewardship, decision-making, and documentation. Its mandate is practical: to maintain transparency, ensure funding continuity, and provide a structure that reflects the scale React has reached.
The formation of this body changes the foundation of trust around the technology itself. Companies that depend on React can now plan against an institutional framework rather than a single corporate roadmap. Release cycles, event organisation, and licensing all operate under collective oversight. This governance structure transforms React from a product into an infrastructure that can support long-term commitments.
For developers and clients, this brings steadiness. Progress becomes visible through public discussion and recorded consensus. Planning horizons extend, and adoption decisions can rest on documented continuity rather than assumptions. The framework is no longer defined only by its codebase but by the durability of the structure that surrounds it.
Product Vignette
A design platform shared between web and mobile clients offers a clear view of how these changes converge. The web application processes extensive data tables, while the mobile counterpart focuses on gesture-heavy interactions. Both rely on the same design system and often mirror each other’s constraints. The team introduces the React Compiler on specific web routes and prepares its mobile application for the New Architecture rollout.
After one development cycle, results begin to appear in measurable terms. The compiler reduces redundant rendering on the web, while the mobile build gains speed through the JSI-based native interface and Hermes improvements. Interface transitions become smoother, and load times shorten across devices. These outcomes arise not from redesign but from the steady application of updated tools and measured validation. The improvement feels embedded rather than added.
The Decisions Ahead
The evolution of React this year defines a sequence of practical steps. Web teams are beginning controlled compiler trials within selected features, establishing clear baselines and collecting consistent metrics. Native teams are preparing for version 0.82, verifying Hermes configurations, and aligning internal documentation with the New Architecture. The focus across both groups is on stability before expansion, on maintaining confidence as systems evolve.
The immediate priorities are clear. Enable the compiler within a contained scope and observe behaviour over time. Catalogue native modules for compatibility and plan incremental upgrades. Evaluate rendering libraries under production data conditions. Create an internal group responsible for monitoring foundation updates and governance changes. These steps convert public announcements into operational progress and turn a broad roadmap into accountable work.
FAQ: React Native, gluestack & Modern UI Workflows
1. How can React Native applications scale effectively for real-time use cases like device tracking?
Scalability begins with optimising rendering and architecture. Techniques such as memoisation and clustering reduce UI overhead when managing thousands of elements: memoisation prevents unnecessary re-renders, while clustering groups nearby markers into single units so the map stays responsive without losing context. Intelligent resource management, such as conditional polling, keeps the app efficient under real-world data loads.
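Clustering can be as simple as bucketing markers into grid cells and rendering one marker per occupied cell. The sketch below (plain TypeScript; all names are hypothetical) shows the idea; production apps typically rely on a dedicated clustering library:

```typescript
// Sketch of grid-based marker clustering: devices in the same grid cell
// collapse into one cluster, so the map renders a bounded number of
// markers regardless of how many devices are tracked.
interface Device { id: string; lat: number; lng: number; }
interface Cluster { lat: number; lng: number; count: number; }

function clusterByGrid(devices: Device[], cellDeg: number): Cluster[] {
  const cells = new Map<string, Device[]>();
  for (const d of devices) {
    const key = `${Math.floor(d.lat / cellDeg)}:${Math.floor(d.lng / cellDeg)}`;
    const members = cells.get(key);
    if (members) members.push(d);
    else cells.set(key, [d]);
  }
  // One marker per occupied cell, positioned at the members' centroid.
  return Array.from(cells.values()).map((members) => ({
    lat: members.reduce((s, d) => s + d.lat, 0) / members.length,
    lng: members.reduce((s, d) => s + d.lng, 0) / members.length,
    count: members.length,
  }));
}

const devices: Device[] = [
  { id: "a", lat: 12.91, lng: 77.61 },
  { id: "b", lat: 12.93, lng: 77.63 }, // same 0.1-degree cell as "a"
  { id: "c", lat: 28.61, lng: 77.21 }, // far away: its own cluster
];
const clusters = clusterByGrid(devices, 0.1);
console.log(clusters.length); // 2
```

Varying the cell size with zoom level keeps the marker count roughly constant at every scale, which is what breaks the linear relationship between device count and UI cost.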
2. What are the biggest challenges of integrating Live Activities in React Native?
Live Activities require bridging between JavaScript and iOS native modules. Developers must create Widget Extensions, configure ActivityAttributes, and manage push tokens. It involves careful coordination between React Native logic and Swift implementations, ensuring smooth updates, frequent refresh support, and robust error handling.
3. How does yalc improve the contribution workflow for component libraries like gluestack-ui?
yalc acts like a local npm registry, allowing contributors to publish and test packages locally in real applications (Next.js or Expo) without public releases. It eliminates dependency issues common with npm link, provides production-like testing conditions, and accelerates feedback loops between contributors and maintainers.
4. How is React Native’s New Architecture improving app performance?
The New Architecture introduces JSI (JavaScript Interface) for faster native-JS communication, synchronous layout for visual stability, and concurrent rendering for perceived responsiveness. These improvements reduce rendering time, enhance frame rates, and make apps feel more fluid under heavy workloads.
5. How does gluestack v3 differ from traditional UI libraries like MUI or Chakra?
Unlike dependency-heavy frameworks, gluestack v3 uses a copy-paste architecture that lets developers fully own and customise their components. It’s modular, works across React, Next.js, and React Native, and is optimised for performance. This flexibility is driving its rapid adoption and community contributions.