Real-Time Geolocation Alerts

We Digitized Logistics Tracking for Fleets Using Real-Time Data and AWS IoT.

Customer challenges

Our client required a real-time alert system to manage over 50,000 daily alerts and 1,500+ concurrent users with ultra-low latency. With no existing backend, a new architecture was needed to support scalable geospatial filtering, real-time processing, and single-session login enforcement. The backend had to handle tens of thousands of location updates per minute while managing token validation and cache control. Alerts from external sources like RSS and Waze required normalization, ingestion, and cleanup, along with long-term archival. Additionally, the client lacked DevOps automation and observability for deployment and monitoring.

Solutions

To address these challenges, a serverless and highly scalable architecture was implemented using AWS-managed services. Real-time communication was established through Amazon API Gateway using WebSocket protocols, allowing persistent connections with mobile and web clients. A custom Lambda authorizer validated user authentication tokens issued by Amazon Cognito and enforced single-session-per-user logic. Sessions were cached in Redis for quick access and replicated in DynamoDB as a fallback store. This enabled both performance and consistency, even during cache failures or concurrent logins.
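The single-session-per-user logic above can be sketched as a check-and-swap on the session store. This is a minimal illustration, not the client's actual authorizer: an in-memory Map stands in for ElastiCache (Redis), and in the real system the same logic would run inside the Lambda authorizer with DynamoDB as the fallback store. All names here are illustrative.

```typescript
// In-memory stand-in for the Redis session cache: userId -> active connectionId.
const sessions: Map<string, string> = new Map();

/**
 * Register a new WebSocket connection for a user.
 * Returns the previous connection id if one existed, so the caller can
 * close that stale connection — enforcing one active device per user.
 */
function registerSession(userId: string, connectionId: string): string | null {
  const previous = sessions.get(userId) ?? null;
  sessions.set(userId, connectionId); // newest login wins
  return previous;
}

/** True only for the most recently registered connection of this user. */
function isActiveSession(userId: string, connectionId: string): boolean {
  return sessions.get(userId) === connectionId;
}

// Example: a second login displaces the first.
registerSession("user-42", "conn-A"); // first login
registerSession("user-42", "conn-B"); // second device displaces conn-A
console.log(isActiveSession("user-42", "conn-A")); // false
console.log(isActiveSession("user-42", "conn-B")); // true
```

The "newest login wins" policy means a concurrent login invalidates the older session rather than being rejected, which matches the strict one-device-per-user behavior described in the results.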

User location updates were streamed every 15 seconds through WebSocket and processed by a dedicated Lambda function. These locations were indexed in Redis using geospatial data structures, enabling real-time lookups for nearby users and alerts. Alert data was fetched from multiple public RSS sources and third-party providers using Step Functions that executed every minute. These workflows triggered Lambda functions to ingest, filter, deduplicate, and store alerts in Redis with TTLs to ensure timely cleanup. A secondary Lambda regularly cleaned expired keys from Redis, and another archival Lambda converted near-expiry alerts into Parquet format for storage in Amazon S3, preserving historical records for compliance and analytics.
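The nearby-user lookups above rely on Redis's geospatial commands (GEOADD/GEOSEARCH). The sketch below reimplements that radius search in plain TypeScript to show the underlying metric — great-circle (haversine) distance — with a Map standing in for the Redis geo index. Identifiers and coordinates are illustrative.

```typescript
interface Position { lon: number; lat: number; }

const EARTH_RADIUS_M = 6_371_000;

// Great-circle (haversine) distance in metres between two points.
function haversine(a: Position, b: Position): number {
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
}

// Rough equivalent of GEOSEARCH ... BYRADIUS: members within radiusM of center.
function nearby(
  index: Map<string, Position>,
  center: Position,
  radiusM: number,
): string[] {
  return [...index.entries()]
    .filter(([, pos]) => haversine(center, pos) <= radiusM)
    .map(([id]) => id);
}

// Example: two drivers near an alert in central Paris, one in London.
const users = new Map<string, Position>([
  ["driver-1", { lon: 2.3522, lat: 48.8566 }],  // central Paris
  ["driver-2", { lon: 2.2945, lat: 48.8584 }],  // ~4 km away
  ["driver-3", { lon: -0.1276, lat: 51.5072 }], // London
]);

console.log(nearby(users, { lon: 2.3522, lat: 48.8566 }, 10_000));
// ["driver-1", "driver-2"]
```

In production, Redis performs this search against its geohash-based index rather than a linear scan, which is what keeps lookups fast at tens of thousands of location updates per minute.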

For scalability, the architecture used AWS CDK (in TypeScript) to define and deploy infrastructure as code. CI/CD pipelines were built with GitHub Actions, enabling fast, reliable deployments across development and production environments, which were isolated by region to avoid data collisions and maintain fault tolerance. Monitoring and logging were handled through Amazon CloudWatch, and critical system failures were sent to internal teams via Slack with deduplication to avoid noise. This gave the team full visibility into the health of the platform.
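A minimal sketch of what such a CDK stack could look like. This is illustrative only: construct names, the table schema, the Node.js runtime, and the direct Lambda schedule (the production system triggered Step Functions workflows) are assumptions, not the client's actual definitions.

```typescript
import { Stack, StackProps, Duration } from "aws-cdk-lib";
import { Construct } from "constructs";
import * as dynamodb from "aws-cdk-lib/aws-dynamodb";
import * as lambda from "aws-cdk-lib/aws-lambda";
import * as events from "aws-cdk-lib/aws-events";
import * as targets from "aws-cdk-lib/aws-events-targets";

export class AlertsStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // Fallback session store (Redis remains the primary cache).
    const sessions = new dynamodb.Table(this, "Sessions", {
      partitionKey: { name: "userId", type: dynamodb.AttributeType.STRING },
      billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
    });

    // Lambda that ingests, filters, and deduplicates external alert feeds.
    const ingest = new lambda.Function(this, "IngestAlerts", {
      runtime: lambda.Runtime.NODEJS_18_X,
      handler: "ingest.handler",
      code: lambda.Code.fromAsset("lambda"),
    });
    sessions.grantReadWriteData(ingest);

    // Run the ingestion workflow every minute, as described above.
    new events.Rule(this, "IngestSchedule", {
      schedule: events.Schedule.rate(Duration.minutes(1)),
      targets: [new targets.LambdaFunction(ingest)],
    });
  }
}
```

Defining the stack this way lets the GitHub Actions pipeline deploy identical, region-isolated copies of the infrastructure for development and production from a single source of truth.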

Architecture

AWS services used

AWS Lambda, Amazon API Gateway (WebSocket), Amazon VPC, Amazon Cognito, Amazon S3, Amazon DynamoDB, Amazon ElastiCache (Redis), AWS Step Functions, Amazon EventBridge, Amazon CloudWatch, AWS CDK, GitHub Actions, Slack Integration

Results

The real-time platform delivered responsive alerting and operational scalability:

  • Achieved millisecond-level latency from alert ingestion to delivery.

  • Supported over 1,500 concurrent users with stable real-time performance.

  • Enforced strict one-device login per user through efficient session control.

  • Enabled dynamic, geolocation-based alert delivery using Redis Geo.

  • Integrated multiple external alert providers with automated fetching.

  • Archived expired alerts in Parquet format on S3 for analytics and compliance.

  • Fully automated infrastructure setup using AWS CDK and GitHub Actions.

  • Maintained clear observability with CloudWatch logs and Slack alerting.

  • Scaled seamlessly to handle 50,000+ alerts daily with no performance drop.
