Using Generative AI To Curate Date Recommendations

The true power of Generative AI is realized when it helps people simplify or automate day-to-day activities and tasks, such as email/message summarization, resume building, and more. This past week I was trying to plan an interesting date night for my girlfriend, and I realized there was no single tool that could give an end-to-end date idea based on both of our interests.
Sure, I could use Google and stitch together a bunch of different places, but this took time and a lot of research (I also always end up on Reddit lol). Alternatively, I tried using something like ChatGPT directly, but a lot of its suggestions were outdated and missed the latest and greatest spots, since the model was trained at an earlier point in time.
In one of my previous articles, we discussed how to generate music recommendations using LangChain Agents in conjunction with the Spotify API. Today we'll use not just a different API, but a slightly different approach than the Agent-driven one we followed for music recommendations, to curate date night ideas.
For this article, we'll utilize the Google Places API along with Amazon Bedrock Claude to power a date night recommendation app. We'll use Streamlit as the UI and allow users to enter specific interests to help curate the experience. Now that we understand the general problem, let's get directly into building out our solution!
NOTE: This article assumes a basic understanding of Python, AWS, LLMs, and LangChain. A great starter article for LangChain can be found [here](https://aws.plainenglish.io/hosting-large-language-models-with-amazon-bedrock-95ebdc2b9c00). If you are new to Amazon Bedrock, please refer to my starter article here.
DISCLAIMER: I am a Machine Learning Architect at AWS and my opinions are my own.
Table of Contents
- Solution Overview
- Solution
  a. Google Places API Setup
  b. LangChain & Bedrock Setup
  c. UI Setup & Demo
- Additional Resources & Conclusion
1. Solution Overview
To provide our LLM with real-time data access, we will be utilizing the Google Places API. With the Places API, we can pass in keywords so that relevant locations for that topic are returned. We will use the API on the client side and pass the retrieved data directly into the prompt itself. Why do we not use Agents in this case? There are a few reasons:
- LangChain Agents are a powerful way of giving LLMs real-time data access, but in this case our LLM has only one function: returning recommendations for the topic we are querying. While we could still use Agents in this use-case, they deliver far more value when there are multiple functions/purposes the Agent must reason between to determine the right course of action. In this case, the sole functionality we need from the Places API is retrieving top locations.
- Bedrock Claude 2.1, which we use in this case, has a large context window of 200K tokens. For this use-case, we can directly fit a lot of the data retrieved from the API into the prompt itself, which makes the solution much simpler than implementing a custom LangChain Tool and defining an Agent.
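To sanity-check that second point, here's a rough back-of-envelope estimate. The ~4-characters-per-token heuristic and the sample record below are assumptions for illustration, not exact Claude tokenization:

```python
import json

# A sample Places-style record (hypothetical values)
sample = {"Name": "Some Thai Spot", "Address": "123 Main St", "Rating": 4.5}

# Even with 20 results each for food and for the secondary activity,
# the serialized payload is tiny relative to a 200K-token context window.
payload = json.dumps([sample] * 40)
approx_tokens = len(payload) // 4  # rough 4-chars-per-token heuristic

print(approx_tokens)  # a few hundred tokens, far below the context limit
```

So even a generous amount of retrieved location data fits comfortably in the prompt, which is what lets us skip the Agent machinery entirely.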
Now that we've discussed our reasoning for going this approach, we can understand what our solution will look like at a high level:

Let's get a quick understanding of the above stack that we will be using to implement this solution:
- Google Places API: Google offers a set of APIs for interacting with its Maps-related services. We specifically explore the Google Places API, using it to find nearby locations for the activities/topics our end user requests. Note that you have to create a Google Cloud account and enable the Places API (via a Google Cloud Project); setup instructions can be found here.
- LangChain: LangChain is a popular Python framework that helps simplify Generative AI applications by providing ready-made modules for prompt engineering, RAG implementation, and LLM workflow orchestration. In this specific use-case, we use LangChain to structure our prompt for a better LLM response.
- Prompt Templates: LangChain offers Prompt Templates, which essentially let you provide structured input to your LLM. With Prompt Templates you can define different variables that get injected into your prompt. In this case we take the different user inputs, along with the locations returned by the Google Places API, and inject them into the Prompt Template we have defined.
- Bedrock Claude: We will utilize Claude via Amazon Bedrock as the LLM that is the brains behind this operation.
- Streamlit: Streamlit is an open-source Python library that makes it easy to build quick web applications. This is especially handy for non-Javascript masters such as myself and allows you to define pretty neat UIs with minimal Python code. In this case we will use it as our interface for users to share specific interests and will then display the LLM output.
2. Solution
a. Google Places API Setup
To get started, we will define a few helper functions that interface with the Google Places API. As stated earlier, please create a Google Cloud Account and Project to acquire the Places API keys. To work with the Places API we can utilize the Google Maps Python Library and instantiate our client:
import googlemaps # pip install if needed
# instantiate gmaps client
gmaps = googlemaps.Client(key='Enter API Key')
To work with the Places API for our use-case we need a few parameters:
- Location: This is where the user is located. For this demo we just allow the user to pick between New York and San Francisco; in a real-world use-case, adjust this to allow any address.
- Radius: The distance from the user's chosen location; we cap this between one and ten miles for the demo.
- Place/Keyword: This is the activity or place the user is interested in, such as "Thai Food" or "Comedy Clubs". These are the types of locations the Places API will look for. We define a few options the user can choose from in the UI.
We first define a function that takes these parameters and returns the top locations and their addresses as well as their ratings:
def places_recommendations(location: dict, radius: float, place: str) -> list:
    locations = []
    places = gmaps.places_nearby(location=location, radius=radius, keyword=place)
    for result in places['results']:
        # use .get for the rating since newer places may not have one yet
        locations.append({"Name": result['name'], "Address": result['vicinity'],
                          "Rating": result.get('rating')})
    return locations
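To make the parsing concrete, here's what that extraction looks like against a hand-built response in the same shape as a `places_nearby` result. The structure below is an assumption covering only the fields our helper reads; real responses carry many more fields, and the place names here are illustrative:

```python
# Hypothetical, trimmed-down places_nearby-style response (an assumption;
# only the fields our helper reads are included)
places = {
    "results": [
        {"name": "Thai Villa", "vicinity": "5 E 19th St", "rating": 4.7},
        {"name": "Soothr", "vicinity": "204 E 13th St", "rating": 4.6},
    ]
}

# same extraction loop as the helper above
locations = []
for result in places["results"]:
    locations.append({
        "Name": result["name"],
        "Address": result["vicinity"],
        "Rating": result.get("rating"),  # rating can be missing for new places
    })

print(locations[0]["Name"])  # Thai Villa
```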
For the sake of our demo, we allow users to enter a food they like along with a secondary activity they want for their date. We define another helper function that curates recommendations for both options the user has selected:
def curate_recommendations(location: str, distance: int, cuisine: str,
                           activity: str) -> tuple:
    coordinates = gmaps.geocode(location)
    loc_coordinates = coordinates[0]['geometry']['location']
    radius = distance * 1609.34  # convert miles to meters for the Maps API
    food_recs = places_recommendations(loc_coordinates, radius, cuisine)
    activity_recs = places_recommendations(loc_coordinates, radius, activity)
    return (food_recs, activity_recs)
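The geocoding step can feel opaque, so here's the response shape we rely on, sketched with plain data. The structure is an assumption matching only the fields we index into, and the coordinates are illustrative:

```python
# Hypothetical, trimmed gmaps.geocode() response for "New York, New York"
# (an assumption; only the fields we index into are shown)
coordinates = [{"geometry": {"location": {"lat": 40.7128, "lng": -74.0060}}}]

# same indexing as curate_recommendations above
loc_coordinates = coordinates[0]["geometry"]["location"]
radius = 5 * 1609.34  # 5 miles expressed in meters for the Places API

print(loc_coordinates)  # {'lat': 40.7128, 'lng': -74.006}
print(round(radius))    # 8047
```

The lat/lng dict is exactly what `places_nearby` accepts as its `location` argument, which is why we can pass it straight through.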
Now that we have defined our methods to interact with the Google Places API, let's see how we can fit that retrieved data into our LangChain Prompt.
b. LangChain & Bedrock Setup
We first setup our Prompt Template utilizing the following LangChain import:
from langchain.prompts.prompt import PromptTemplate
For the Prompt Template, there are a few main parameters we want injected, which we already produced in our Google Places helper functions:
- User location
- Cuisine preference
- Top cuisine recommendations
- Secondary activities (Note we limit this to a few options for the demo)
- Top recommendations for chosen secondary activity
We take the Places API results and structure them into the following Prompt Template, which captures all of the above:
# Prompt Template Setup
claude_template = """Human: Generate a date idea based off of the information that has been provided:
User/Requester Location: {location}
Cuisine/Restaurant: {cuisine}
Top locations for desired cuisine in location inputted: {food_recommendations}
Secondary interests/activities along with food for date: {activity}
Top locations for desired secondary activity in location inputted: {activity_recommendations}
Based off of the interests and recommendations provided and also using your existing knowledge of the location provided, give an end to end date idea.
Assistant:
"""
We then define the input variables we discussed above that will be injected:
date_prompt = PromptTemplate(
    input_variables=["location", "cuisine", "food_recommendations", "activity", "activity_recommendations"],
    template=claude_template
)
To process this prompt we set up Bedrock Claude, specifying the Model ID and the payload format. This LLM will be the brains of the operation, utilizing the structured information we've provided via the Prompt Template. We'll simplify invocation on the UI side by defining another helper method to invoke the Claude model:
import boto3
import json

# establish the Bedrock runtime client used for model invocation
runtime = boto3.client('bedrock-runtime')

# helper method to invoke Bedrock Claude
def invoke_bedrock(input_prompt: str, model_id: str = 'anthropic.claude-v2',
                   accept: str = 'application/json', contentType: str = 'application/json') -> str:
    # Claude v2 text-completions payload
    body = json.dumps({"prompt": input_prompt, "max_tokens_to_sample": 500})
    response = runtime.invoke_model(
        body=body, modelId=model_id, accept=accept, contentType=contentType
    )
    response_body = json.loads(response.get("body").read())
    output = response_body.get("completion")
    return output
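Since we can't easily run Bedrock inline here, it helps to see the response-parsing step in isolation. The body below mimics the JSON shape our helper reads back (a `completion` field); the text and `stop_reason` value are invented for illustration:

```python
import json

# Simulated Claude v2 response body (the "completion" field is the shape our
# helper reads; the content here is made up for illustration)
raw_body = json.dumps({
    "completion": " Here's a fun date idea: start with dinner at a Thai spot...",
    "stop_reason": "stop_sequence",
})

# same parsing as the helper: load the JSON and pull out the completion text
response_body = json.loads(raw_body)
output = response_body.get("completion")

print(output[:25])
```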
Now that our backend has been configured we can build the Streamlit UI and see how our model does.
c. UI Setup & Demo
For the UI, we install Streamlit and import our utility functions:
import streamlit as st
from utils import curate_recommendations, invoke_bedrock, date_prompt
We then create a sidebar which has toggles to select the different user inputs that we have discussed:
st.sidebar.title("Curate Date Experiences With Generative AI")
# Supported locations
location = st.sidebar.selectbox(
    'Location',
    ('New York, New York', 'San Francisco, California')
)
# Distance willing to travel
# slider caps distance between one and ten miles, defaulting to five
distance = int(st.sidebar.slider("Distance Willing To Travel (Miles)", 1.0, 10.0, 5.0))
# note: curate_recommendations handles the miles-to-meters conversion
# Food interests
cuisine = st.sidebar.selectbox(
    'Cuisine',
    ('Thai Food', 'Italian Food', 'Indian Food', 'Chinese Food', 'Japanese Food', 'American Food')
)
# Other interests
secondary_activity = st.sidebar.selectbox(
    'Post Food Activity',
    ('Comedy Show', 'Speakeasy', 'Rooftop Bar')
)

We then configure the "Submit" button to trigger the Google Places recommendations, which we inject into the LangChain Prompt Template we have defined.
# generate recommendations on button click
if st.sidebar.button("Submit"):
    recommendations = curate_recommendations(location, distance, cuisine, secondary_activity)
    food_recs = recommendations[0]
    activity_recs = recommendations[1]
    # structure prompt template
    input_prompt = date_prompt.format(
        location=location,
        cuisine=cuisine,
        food_recommendations=food_recs,
        activity=secondary_activity,
        activity_recommendations=activity_recs
    )
We then take this prompt and pass it to the Bedrock Claude model for inference, displaying the output in the UI (note these lines stay indented inside the button block):
    output = invoke_bedrock(input_prompt=input_prompt)
    st.write(output)

Pretty neat! Note that you can adjust the code to include more locations, or add additional user inputs to tailor the experience even further.
3. Additional Resources & Conclusion
GenAI-Samples/Date-Idea-Generator at master · RamVegiraju/GenAI-Samples
The code for the sample can be found at the link above. With a pretty minimal setup, we are able to solve a day-to-day problem (not a bad one to have) that many of us face. Note that you can extend this example further as always, be it customizing the user inputs or using Agents to do something like booking a reservation for you. The possibilities are infinite with Generative AI, and in coming articles we'll continue to look at practical real-world applications we can build with these types of tools.
As always thank you for reading and feel free to leave any feedback.
If you enjoyed this article feel free to connect with me on LinkedIn and subscribe to my Medium Newsletter.