Unified Profiles for Travelers and Guests on AWS (UPT) is a solution designed to bring together customer data from various internal systems into a single, comprehensive profile. By automatically sourcing, merging, de-duplicating, and centralizing guest information, UPT enables organizations to gain a 360-degree view of their customers. This unified profile serves as a valuable resource for marketing, sales, operations, and customer experience teams, facilitating personalized engagements, improved customer retention, and increased loyalty.
The solution provides a scalable means of transforming disparate data from source systems (bookings, conversations, clickstream events, and more) into profiles. It leverages rules-based and AI identity resolution, determining identity from collections of interactions. This approach allows for real-time resolution and targeting in an event-driven manner, and it reduces the time to transform and integrate historical batch data from months to weeks.
The solution provides downstream integration for marketing and customer service use cases. DynamoDB and Amazon Connect Customer Profiles integrations are provided out of the box, and profile data is in a format that can be sent seamlessly to other platforms (e.g. Salesforce).
The solution diagram can also be found in the UPT in Solutions Library link above.
- `deployment/developer-pipeline`: scripts to run the developer pipeline
- `deployment/ucp-code-pipelines`: CDK project with the full solution CI/CD pipeline, used internally for development
- `deployment/build-s3-dist.sh`: simplified build script that produces the artifacts necessary to publish the solution
- `source/storage`: primary implementation of LCS (Low Cost Storage), the primary profile storage component of UPT, built on Aurora PostgreSQL
- `source/tah_lib`: Python package for business object transformation
- `source/tah-common`: schemas for Travel and Hospitality
- `source/tah-core`: utility functions and AWS SDK wrappers for Travel and Hospitality
- `source/ucp-async`: invoked by ucp-backend for longer-running use cases that would otherwise hit the 30-second timeout for API Gateway
- `source/ucp-backend`: primary API for the solution, sitting behind API Gateway
- `source/ucp-batch`: batch processing for input data
- `source/ucp-change-processor`: subscribes to the Amazon Connect Customer Profiles (ACCP) stream for changes, writes to S3 and optionally EventBridge
- `source/ucp-common`: shared code used across UPT components
- `source/ucp-cp-indexer`: dynamically indexes CP export streams to create ID indexing between UPT and CP, allowing CP to be used in a completely readless manner
- `source/ucp-cp-writer`: dedicated Lambda for writing to CP (Amazon Connect Customer Profiles)
- `source/ucp-error`: processes and stores errors from the various DLQs in a centralized DynamoDB table
- `source/ucp-etls`: Glue jobs
- `source/ucp-fargate`: Fargate tasks for identity resolution
- `source/ucp-industry-connector-transfer`: copies events from the Industry Connector solution into an S3 bucket for UPT to process via a Glue job (deprecated)
- `source/ucp-infra`: CDK project for all solution infrastructure
- `source/ucp-match`: processes CSV files containing match scores between profiles, produced by ACCP or optionally a third-party service that customers can configure
- `source/ucp-merger`: handles merging within the solution (rules-based IR and AI IR)
- `source/ucp-portal`: Angular web app to use and manage UPT
- `source/ucp-real-time-transformer`: Kinesis stream for real-time record ingestion -> Python Lambda that transforms the nested business object (tah-common schema) into flat ACCP objects (mapping stored in ucp-common's accp-mapping folder) -> Go Lambda that sends ACCP objects to ACCP
- `source/ucp-retry`: receives specific error types from ucp-error that should be retried
- `ucp-s3-excise-queue-processor`: functionality for hard deletes of profile information from S3 buckets
- `source/ucp-sync`: maintains partitions for business object S3 buckets and triggers Glue jobs based on the customer's configuration
- `source/z-coverage`: test output produced by the solution (prefixed with z for convenience, so test files appear at the end of CRs)
- `vendor`: contains all packages used by Go (this provides consistent builds without pipelines needing to fetch packages, which is particularly important since we use internal packages that are not distributed)
- `develop`: contains the latest code for the upcoming release.
- `release/vX.X.X`: release branches are created off develop at publish time to have a snapshot of the exact release code.
- `feature/upt-[type]-[number]`: feature branches are created off develop and used for development. When feature branches are pushed to the remote, pipelines are kicked off to test the code and run the AWS Solutions developer pipeline.
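As a quick sketch of that workflow (the branch name below is a hypothetical example):

```sh
# Create a feature branch off develop; "upt-feat-123" is illustrative only
git checkout develop
git pull
git checkout -b feature/upt-feat-123
# Pushing the branch to the remote kicks off the test pipelines
git push -u origin feature/upt-feat-123
```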
- Go
  - To avoid using a proxy, set: `export GOPROXY=direct`
- Node
- Python
  - `python3 -m pip install coverage`
  - `python3 -m pip install requests`
- jq
- AWS CLI: make sure you have configured your AWS credentials.
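As a quick sanity check that the prerequisites are in place (a minimal sketch; specific version requirements are not listed here):

```sh
# Verify the toolchain is available
go version
node --version
python3 --version
jq --version
# Confirm the AWS CLI can reach your account with the configured credentials
aws sts get-caller-identity
```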
- Create two S3 buckets in your local AWS account
  - One bucket for storing UPT artifacts, e.g. `<alias>-upt-artifacts`
  - Another bucket for storing regional assets. This should match the bucket created above with the region appended, e.g. `<alias>-upt-artifacts-<region>`
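  For example, assuming an alias of `myalias` and the `us-east-1` region (both placeholders), the buckets could be created with the AWS CLI:

  ```sh
  # Artifact bucket and matching regional asset bucket
  aws s3 mb s3://myalias-upt-artifacts
  aws s3 mb s3://myalias-upt-artifacts-us-east-1 --region us-east-1
  ```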
- Clone the repository and navigate to the root directory (note: there will likely be errors if opened in an IDE until all the necessary packages are imported)
- Locate `env.example.json` in `source/`. Create a copy named `env.json` and update `artifactBucket` to be the bucket name you just created, without the region, e.g. `<alias>-upt-artifacts`.
  - `env.json` is referenced in scripts to get local env settings.
  - The `artifactBucket` is where all the artifacts (Lambda function binaries, zipped code, etc.) will be stored.
  - Update the values for `ecrRepository` and `ecrPublishRoleArn` to match the repository and role in your account. These resources can be created by deploying the `UptInfra` stack.
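  A minimal sketch of this step (the values shown in the comments are placeholders; keep any other keys from `env.example.json` as they are):

  ```sh
  cd source
  cp env.example.json env.json
  # Edit env.json so that, at minimum, these keys match your account:
  #   artifactBucket     -> the artifact bucket created earlier (without the region suffix)
  #   ecrRepository      -> your ECR repository
  #   ecrPublishRoleArn  -> your ECR publish role ARN
  ```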
- The unit tests can be configured to use a "test" cluster in AWS RDS, or a local Postgres database running as a container.
  - To run unit and storage tests against an RDS cluster deployed to your AWS account, run the `TestAuroraPosgres` test found in `source/tah-core/aurora/aurora_test.go`. Alternatively, create the Aurora test cluster, with a secure connection, and input the parameters for that cluster in `env.json`. A Secret in Secrets Manager will also need to be created with the keys `username` and `password`. Note that this is strictly for testing purposes; the solution itself deploys its own resources, including its own cluster, and the E2E tests use that solution cluster.
  - To run unit and storage tests against a locally deployed Postgres container, run the `run-local-aurora.sh` script found in the `source` folder. This script will create the necessary certificates to enable SSL, as well as scaffold the Secret in Secrets Manager based on values in `env.json`.
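  As a sketch of the two options (paths are relative to the repository root; the exact test invocation may differ in your setup):

  ```sh
  # Option A: run the storage tests against the RDS cluster configured in env.json
  (cd source/tah-core && go test -run TestAuroraPosgres ./aurora/...)

  # Option B: start a local Postgres container plus the certificates and secret it needs
  (cd source && sh run-local-aurora.sh)
  ```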
- Pull in the latest code for the private tah packages
  - Navigate to `source/` and run `update-private-packages.sh`
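  For example:

  ```sh
  # Fetch the latest versions of the private tah packages
  cd source
  sh update-private-packages.sh
  ```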
- In the Solutions Library (UPT in Solutions Library) you will find the CloudFormation template. Download the template and, in the AWS account where you wish to deploy the solution, go to CloudFormation and deploy it. Reference the Implementation Guide for more information on deployment and the parameters used to deploy the solution.
- Deploy each Lambda function's code individually (directories with the `source/ucp-` prefix, excluding `ucp-common`, `ucp-core`, `ucp-infra`, and `ucp-portal`) using the `deploy-local.sh` script in each microservice's directory
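  For example, for a single microservice (repeat for each of the others):

  ```sh
  # Build and deploy one microservice's Lambda code
  cd source/ucp-backend
  sh deploy-local.sh
  ```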
- Deploy infrastructure in `source/ucp-infra` using its `deploy-local.sh` script
  - If the solution is not deployed, optionally add the `--skip-tests` option, as some of the tests may fail.
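  For example, on a first-time deployment where the stack does not exist yet:

  ```sh
  # Deploy the CDK infrastructure; skip tests until the solution is deployed
  cd source/ucp-infra
  sh deploy-local.sh --skip-tests
  ```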
- Run the web app locally using its `deploy-local.sh` script
  - Optionally, deploy `ucp-portal`, the web app for UPT, to CloudFront so it can be accessed via the CloudFront distribution's URL (found in the CloudFormation output). Update and invalidate the cache using `deploy-dev.sh`. This will sync the UI you can access via the CloudFormation output URL with the code you have locally.
  - Note: the front end is only accessible in this manner if the solution was deployed with the CFN parameter for UI deployment set to CF (CloudFront).
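  A sketch of both options, assuming both scripts live in `source/ucp-portal` (the script names come from the steps above):

  ```sh
  # Run the web app locally
  cd source/ucp-portal
  sh deploy-local.sh

  # Or sync the locally built UI to the deployed CloudFront distribution and invalidate the cache
  sh deploy-dev.sh
  ```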
- Run `deploy-local-all.sh` to deploy everything at once (this will take a while to run)
  - If the solution is not deployed, then some of the tests may fail, and there will be no Lambda functions to update. When first setting up the solution, run `sh deploy-local-all.sh --skip-errors`. If the solution fails to deploy, add the `--skip-tests` option: `sh deploy-local-all.sh --skip-errors --skip-tests`
- Follow the instructions in the Implementation Guide. If you choose to deploy locally, start in the section after Launch the stack. This walks through setting up a user in Cognito and creating and configuring a domain in Amazon Connect Customer Profiles. It also helps ensure our Implementation Guide is easy to follow for customers.
See CONTRIBUTING for more information.
This project is licensed under the Apache-2.0 License.
This solution collects anonymized operational metrics to help AWS improve the quality and features of the solution. For more information, including how to disable this capability, please see the implementation guide.