Integrating Google Gemini AI into SwiftUI Applications
Chapter 1: Introduction to Google Gemini AI
This guide will walk you through the process of incorporating Google Gemini AI into a SwiftUI application.
At the upcoming WWDC, Apple is anticipated to unveil an on-device large language model (LLM). The forthcoming version of the iOS SDK is likely to simplify AI feature integration for developers. While we await Apple's introduction of its own Generative AI models, platforms like OpenAI and Google are already offering SDKs for iOS developers to embed AI functionalities into mobile applications. In this tutorial, we will focus on Google Gemini, previously known as Bard, and illustrate how to utilize its API to create a basic SwiftUI app.
We aim to develop a question-and-answer application leveraging the Gemini API. This app will have a simple user interface that includes a text field for users to enter their questions. In the background, the user's query will be sent to Google Gemini to fetch the corresponding answer.
Before you begin, ensure you have Xcode 15 or newer to follow along with this tutorial.
Section 1.1: Getting Started with Google Gemini APIs
If you're new to Gemini, the first step is to obtain an API key for accessing the Gemini APIs. To generate one, visit Google AI Studio and click on the "Create API Key" button.
Section 1.2: Using Gemini APIs in Swift Applications
Now that you have your API key, it's time to use it in your Xcode project. Open Xcode and create a new SwiftUI project, which we'll name GeminiDemo. To securely store your API key, create a property file titled GenerativeAI-Info.plist. In this file, set a key named API_KEY and enter your API key as its value.
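For illustration, a minimal GenerativeAI-Info.plist might look like the following. The placeholder value is an assumption; substitute your actual key:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>API_KEY</key>
    <string>YOUR_API_KEY_HERE</string>
</dict>
</plist>
```

Remember to keep this file out of source control so your key is not accidentally published.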
To retrieve the API key from the property file, create another Swift file called APIKey.swift. Add the following code to this file:
enum APIKey {
    // Fetch the API key from GenerativeAI-Info.plist
    static var `default`: String {
        guard let filePath = Bundle.main.path(forResource: "GenerativeAI-Info", ofType: "plist") else {
            fatalError("Couldn't locate file 'GenerativeAI-Info.plist'.")
        }

        let plist = NSDictionary(contentsOfFile: filePath)

        guard let value = plist?.object(forKey: "API_KEY") as? String else {
            fatalError("Couldn't find key 'API_KEY' in 'GenerativeAI-Info.plist'.")
        }

        if value.starts(with: "_") {
            fatalError("The API key appears to be a placeholder. Replace it with your own key.")
        }

        return value
    }
}
If you choose a different name for your property file instead of 'GenerativeAI-Info.plist', you will need to update the code in 'APIKey.swift' accordingly to ensure the API key is correctly retrieved.
Subsection 1.2.1: Adding the SDK via Swift Package
The Google Gemini SDK can be easily added as a Swift Package. To do this, right-click the project folder in the project navigator and select "Add Package Dependencies." In the dialog box, enter the following package URL:

https://github.com/google/generative-ai-swift
Then, click the "Add Package" button to download and include the GoogleGenerativeAI package in your project.
Chapter 2: Building the Application's User Interface
Now, let’s focus on the UI. The design is simple, featuring a text field for user input and a label for displaying responses from Google Gemini.
Open ContentView.swift and declare the following properties:
@State private var textInput = ""
@State private var response: LocalizedStringKey = "Hello! How can I assist you today?"
@State private var isThinking = false
The textInput variable captures user input from the text field, while the response variable holds the reply returned by the API. To account for the API's response time, we include an isThinking variable to indicate when the app is processing the request.
Update the body variable with this code to construct the user interface:
VStack(alignment: .leading) {
    ScrollView {
        VStack {
            Text(response)
                .font(.system(.title, design: .rounded, weight: .medium))
                .opacity(isThinking ? 0.2 : 1.0)
        }
    }
    .contentMargins(.horizontal, 15, for: .scrollContent)

    Spacer()

    HStack {
        TextField("Type your message here", text: $textInput)
            .textFieldStyle(.plain)
            .padding()
            .background(Color(.systemGray6))
            .clipShape(RoundedRectangle(cornerRadius: 20))
    }
    .padding(.horizontal)
}
This code is straightforward if you have some familiarity with SwiftUI. After making these changes, you should see the following user interface in the preview.
Section 2.1: Integrating with Google Gemini
To utilize the Google Gemini APIs, you first need to import the GoogleGenerativeAI module:
import GoogleGenerativeAI
Next, declare a model variable and initialize the Generative model as follows:
let model = GenerativeModel(name: "gemini-pro", apiKey: APIKey.default)
Here, we are using the gemini-pro model, which is specifically designed to generate text from input text.
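As a quick sanity check, here is a minimal sketch of calling the model directly, assuming the GoogleGenerativeAI package and the APIKey helper from earlier are in place:

```swift
import GoogleGenerativeAI

let model = GenerativeModel(name: "gemini-pro", apiKey: APIKey.default)

Task {
    do {
        // Send a plain-text prompt and print the generated reply
        let result = try await model.generateContent("Explain SwiftUI in one sentence.")
        print(result.text ?? "No text in response")
    } catch {
        print("Request failed: \(error)")
    }
}
```

Since generateContent is asynchronous and can throw, it must be called with try await inside an async context such as a Task.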
To send the text to Google Gemini, let’s create a function named sendMessage():
func sendMessage() {
    response = "Thinking..."

    withAnimation(.easeInOut(duration: 0.6).repeatForever(autoreverses: true)) {
        isThinking.toggle()
    }

    Task {
        do {
            let generatedResponse = try await model.generateContent(textInput)

            guard let text = generatedResponse.text else {
                response = "Sorry, Gemini encountered an issue.\nPlease try again later."
                return
            }

            textInput = ""
            response = LocalizedStringKey(text)

            isThinking.toggle()
        } catch {
            response = "An error occurred!\n\(error.localizedDescription)"
        }
    }
}
With this code, you only need to call the generateContent method of the model to send text and receive the generated response. The result is formatted in Markdown, so we use LocalizedStringKey to encapsulate the returned text.
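To see why this matters, compare how SwiftUI renders the same Markdown string with and without LocalizedStringKey (a small illustrative sketch):

```swift
// Renders "Bold" in bold and "code" in monospace, because
// LocalizedStringKey interprets the Markdown syntax
Text(LocalizedStringKey("**Bold** and `code`"))

// Renders the raw characters verbatim, asterisks and backticks included
Text(verbatim: "**Bold** and `code`")
```

Wrapping the API's reply in LocalizedStringKey therefore lets any Markdown formatting in Gemini's response appear styled rather than as literal symbols.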
To trigger the sendMessage() function, update the TextField view and attach the onSubmit modifier as follows:
TextField("Type your message here", text: $textInput)
    .textFieldStyle(.plain)
    .padding()
    .background(Color(.systemGray6))
    .clipShape(RoundedRectangle(cornerRadius: 20))
    .onSubmit {
        sendMessage()
    }
Now, when the user finishes typing and presses the return key, the sendMessage() function will be called to submit the text to Google Gemini.
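As an optional refinement not covered above, you could also disable the text field while a request is in flight, so the user cannot submit a second question before the first answer arrives:

```swift
TextField("Type your message here", text: $textInput)
    .textFieldStyle(.plain)
    .padding()
    .background(Color(.systemGray6))
    .clipShape(RoundedRectangle(cornerRadius: 20))
    .disabled(isThinking)   // ignore input while waiting for Gemini
    .onSubmit {
        sendMessage()
    }
```

Because isThinking is already toggled in sendMessage(), this single modifier is enough to gate the input.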
Chapter 3: Conclusion
This tutorial provides a comprehensive guide on how to integrate Google Gemini AI into a SwiftUI application. It requires just a few lines of code to equip your app with Generative AI capabilities. In this demonstration, we utilize the gemini-pro model to generate responses based solely on text input.
However, Gemini AI's capabilities extend beyond text input. It also offers a multimodal model named gemini-pro-vision, allowing developers to input both text and images. We encourage you to take full advantage of this tutorial by modifying the provided code and experimenting with it.
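As a starting point for experimentation, a rough sketch of a gemini-pro-vision call might look like the following. The image name "sample" is an assumption; the SDK accepts UIImage values alongside a text prompt:

```swift
import GoogleGenerativeAI
import UIKit

let visionModel = GenerativeModel(name: "gemini-pro-vision", apiKey: APIKey.default)

Task {
    do {
        // "sample" is a hypothetical image in the asset catalog
        guard let image = UIImage(named: "sample") else { return }
        let result = try await visionModel.generateContent("Describe this image.", image)
        print(result.text ?? "No text in response")
    } catch {
        print("Request failed: \(error)")
    }
}
```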
If you have any questions or comments about this tutorial, please feel free to reach out below.
For those interested in learning Swift and UIKit, check out our book, Beginning iOS Programming with Swift.
Follow us on social media:
- Facebook: facebook.com/AppCodamobile/
- Twitter: twitter.com/AppCodaMobile
- Instagram: instagram.com/AppCodadotcom
If you found this article helpful, please click the 👏 button and share it to help others discover it! Feel free to leave your thoughts in the comments below.