May 26, 2023
Interface with AI in React Native
This article covers Ekaansh Arora's (Developer Advocate, Agora) talk on "Interface with AI in React Native," which he presented at the React Native Meetup held at GeekyAnts.
Why Discuss Interface with AI in React Native?
Let us start with an example. While building an AI app in React Native, suppose you have a button with an onPress handler that calls your AI provider's API directly from the event. At first glance, this seems fairly correct. But there is a key problem — you are exposing your API key in your application bundle.
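A minimal sketch of this anti-pattern, with an illustrative placeholder key and an OpenAI-style endpoint (the key, endpoint, and helper name are assumptions, not the talk's actual code):

```typescript
// ANTI-PATTERN: this key is a string constant, so it ships inside the
// JS bundle with every copy of the app.
const OPENAI_API_KEY = "sk-DEMO-NOT-A-REAL-KEY"; // illustrative placeholder

// Builds the request the onPress handler would send straight to the provider.
export function buildCompletionRequest(prompt: string) {
  return {
    url: "https://api.openai.com/v1/chat/completions",
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      // The secret travels with the app, readable by anyone who downloads it.
      Authorization: `Bearer ${OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: prompt }],
    }),
  };
}

// In a component, the handler would look roughly like:
// <Button onPress={() => { const r = buildCompletionRequest(prompt); fetch(r.url, r); }} />
```

No network interception is needed to steal the key; it sits in plain sight in the bundled JavaScript.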

It is tempting to assume you can just put the key in environment variables and be done with it. Not really.
The problem is fairly common. Check out this tweet by Cyrill Zakka, MD: the thread describes how almost half the apps on the App Store are leaking their API credentials in some way.

Even if you compile down your code and store the key in your Info.plist, anyone can decompile your app and read the API string.
But what if you encode the key and store it in a different format? That is not a solution either — no matter the encoding, a string search can always recover it and compromise your security. Even working through the security documentation on Expo and pulling in obfuscation libraries is likely to be futile, and pixelating sensitive data is not a solution. There are multiple ways to get at the data.
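The string search really is that trivial. As a sketch, using a fake key and a stand-in file for the compiled bundle:

```shell
# Simulate a compiled JS bundle that embeds a key (illustrative file and key)
printf 'var k="sk-DEMO-NOT-A-REAL-KEY";doFetch(k);' > main.jsbundle

# A plain string search over the artifact recovers the credential
grep -ao 'sk-[A-Za-z0-9-]*' main.jsbundle
# prints: sk-DEMO-NOT-A-REAL-KEY
```

Against a real app, the same idea works on the downloaded IPA/APK contents with `strings` and `grep`.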

This is a real problem, because "anything of value that can be hacked will be hacked" — it is only a matter of time and motivation. So if there is something of value in the bundle you hand your users, it is likely to be extracted.
The Solution — Do Not Put Anything There
How can you steal what isn’t there? That is the overarching idea for the solution we are going to implement. So, it is time for the B-word — backend.
We will use TypeScript, Create T3 Turbo, Expo, and tRPC.

The solution is to drop a backend in the middle that holds your key and signs the request. Your app sends only the prompt to the backend; the backend attaches the key and forwards the request to whatever service you want. That removes the attack vector from the app. Your app's source code, or at least its compiled bundle, is in the hands of millions of users, but if you do things right, your backend is not directly accessible to anyone. This makes it a much more secure way of doing things.

As an example, consider building the same AI app in React Native with the concepts we discussed. There is still a button whose handler sends the request, and some state is set to show the response on screen. That is fairly standard React Native code.
The idea is that once these apps are set up, you define something called a tRPC router. The router's keys map to procedures, which behave like functions — defining one feels like writing a function on the backend and calling it from the frontend.

So the API layer is abstracted away for you. We can describe what input this procedure takes: say it accepts a field called prompt of type string. Instead of just returning that input, the procedure takes the code that used to live on the frontend and moves it to the backend. We can trim and slice the input so nothing bad can happen, and then make sure the procedure returns a response the frontend can consume.
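In tRPC v10 this is declared roughly as `t.procedure.input(z.object({ prompt: z.string() })).mutation(...)`. The dependency-free sketch below shows what the mutation body might do — trim and slice the client's prompt, then build the upstream request with the key taken from the server's environment. The helper names, length limit, and endpoint are illustrative assumptions, not the talk's actual code:

```typescript
const MAX_PROMPT_LENGTH = 1000; // illustrative limit

// Trim and slice the client-supplied prompt so nothing oversized gets through.
export function sanitizePrompt(raw: string): string {
  return raw.trim().slice(0, MAX_PROMPT_LENGTH);
}

// Build the upstream request on the server. The key comes from the server's
// environment, so it never appears in the app bundle.
export function buildUpstreamRequest(prompt: string) {
  const key = process.env.OPENAI_API_KEY ?? "";
  return {
    url: "https://api.openai.com/v1/chat/completions",
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${key}`, // attached server-side only
    },
    body: JSON.stringify({
      model: "gpt-3.5-turbo",
      messages: [{ role: "user", content: sanitizePrompt(prompt) }],
    }),
  };
}
```

Inside the actual tRPC mutation, you would `fetch` this request and return only the text the frontend needs.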
Accessing The Function on the Front End
On the frontend, the library gives you an object called api, which exposes the askGPT procedure. Calling api.askGPT.useMutation() hands you a React Query mutation; everything else is standard React.
That is how simple it is to write the backend: you just define functions that you can access from the frontend. You do not need Kubernetes-scale infrastructure to do it.
Deployment
Speaking of Kubernetes and deployment: since everything is serverless, there are no long-running servers to manage. You can execute one command and have a backend deployed, with nothing to worry about. It is highly scalable, offering virtually limitless scale right from the start.
The Future — What More We Can Do
One idea is to move your backend to the edge, which is something you can do fairly easily. This makes things even faster and cheaper: requests are served close to the user, and edge functions are incredibly cheap. You can get 500,000 execution units per month for free without paying anything, which is great. If you are using something like Solito and NativeBase, there are further advantages.
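On Vercel, for example, a pages-style Next.js API route (the setup Create T3 Turbo uses) can opt into the edge runtime with a one-line config export. The file path is an illustrative assumption:

```typescript
// pages/api/trpc/[trpc].ts — opt this API route into the edge runtime
export const config = {
  runtime: "edge",
};
```

With that in place, the same tRPC handler runs in edge locations instead of a single serverless region.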
There are many possibilities. We just have to make cool things and inspire cooler things.
Check out the full talk by Ekaansh Arora here.