Squash Logs – Amazon Web Services

Posted in Big Data, Cloud

Right from the start, this project required a design that could scale in the cloud.

The data store needed to be cloud-based.

The concept: on each web request, the HTML page is served as a static file from CloudFront, and JavaScript then populates the Opponents and Locations lists. So a user never hits the web server or the database at all unless they add a new Opponent or Location, or save a game result. On save, a call to DynamoDB stores the squash game data.
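A minimal browser-side sketch of that flow, assuming hypothetical CloudFront URLs, element IDs, and a plain string-array JSON format:

```typescript
// Populate a <select> from one of the static JSON files served by CloudFront.
// The URLs, element IDs, and string-array format are hypothetical placeholders.
async function populateList(url: string, selectId: string): Promise<void> {
  const response = await fetch(url);
  const items: string[] = await response.json();
  const select = document.getElementById(selectId) as HTMLSelectElement;
  for (const item of items) {
    const option = document.createElement("option");
    option.value = item;
    option.textContent = item;
    select.appendChild(option);
  }
}

// Each user's files live under an unguessable random name (see below).
populateList("https://dxxxx.cloudfront.net/opponent_list/user1.json", "opponent-select");
populateList("https://dxxxx.cloudfront.net/location_list/user1.json", "location-select");
```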

Each user has two sets of data – an opponent list and a location list – and can add values to either at any time through the log form.

For each user, we create a JSON location file and a JSON opponent file, stored on S3 and served through CloudFront. User data therefore loads very quickly, with no load on the server. Whenever a user edits this data, the JSON files are updated via an Amazon Web Services API call.
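As a rough sketch of what such an update could look like with the AWS SDK for JavaScript (v3); the bucket name, key layout, region, and helper function are assumptions on our part:

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

// Overwrite the user's opponent list on S3; CloudFront then serves the
// updated file. Bucket, key layout, and region are hypothetical.
async function saveOpponentList(userFile: string, opponents: string[]): Promise<void> {
  await s3.send(
    new PutObjectCommand({
      Bucket: "bucket_name",
      Key: `opponent_list/${userFile}`,
      Body: JSON.stringify(opponents),
      ContentType: "application/json",
    })
  );
}
```

In practice the CloudFront cache entry for that object would also need a short TTL or an invalidation so edits show up promptly.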

An example of the implemented URL format: https://s3.amazonaws.com/bucket_name/opponent_list/user1.json. Here "user1.json" stands in for a randomly generated name, so users cannot enumerate other users' opponents or locations by guessing filenames.
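One way to generate such unguessable names, sketched with Node's crypto module (the helper name is ours):

```typescript
import { randomBytes } from "crypto";

// 16 random bytes -> 32 hex characters: far too large a space to guess,
// e.g. "9f1c2a7b0d4e8f36a1b2c3d4e5f60718.json"
function randomUserFileName(): string {
  return `${randomBytes(16).toString("hex")}.json`;
}
```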

Once the user clicks Save, the data is sent to the DynamoDB data store via the Amazon Web Services API. An admin can also update or delete these logs via API calls.
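A sketch of that save call using the v3 DynamoDB document client; the table name and item attributes are assumptions, not the project's actual schema:

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({ region: "us-east-1" }));

// Persist one squash game result. Table and attribute names are hypothetical.
async function saveGame(userId: string, opponent: string, location: string, score: string): Promise<void> {
  await ddb.send(
    new PutCommand({
      TableName: "squash_games",
      Item: {
        user_id: userId,
        played_at: new Date().toISOString(),
        opponent,
        location,
        score,
      },
    })
  );
}
```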

You can read more about this project here.

Adserver – Pretargeting

Posted in Big Data, Cloud

Handling millions of ad requests per hour

The diagram below summarizes the implementation of the ad targeting system. Ad requests and responses are handled by an Apache server, and each ad request is saved to a log file on the filesystem, with a new log file generated every hour. The system is designed to handle at least a million requests per hour, and the maximum server response time must not exceed 50 milliseconds.
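The actual request logging happens at the Apache layer, but as an illustration of the hourly rotation scheme, here is a minimal sketch (the directory, naming, and line format are assumptions):

```typescript
import { appendFileSync } from "fs";

// Append each ad request to an hourly file, e.g. requests-2024-05-01T13.log;
// a new file starts automatically when the hour rolls over.
// The path and naming scheme are hypothetical placeholders.
function logAdRequest(line: string): void {
  const hourStamp = new Date().toISOString().slice(0, 13); // "YYYY-MM-DDTHH"
  appendFileSync(`/var/log/adserver/requests-${hourStamp}.log`, line + "\n");
}
```

At a million requests per hour, a real implementation would buffer writes asynchronously rather than issue a synchronous append per request.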

Kafka handles the high ingest load, Cassandra serves as the scalable NoSQL store, and MySQL stores the system's summary data.
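To give a flavor of that pipeline, here is a rough producer sketch using the kafkajs client; the broker address, topic name, and message shape are all assumptions:

```typescript
import { Kafka } from "kafkajs";

const kafka = new Kafka({ clientId: "adserver", brokers: ["kafka-1:9092"] });
const producer = kafka.producer();

async function main(): Promise<void> {
  await producer.connect();
  // Publish one ad-request record; a downstream consumer writes it to Cassandra.
  await producer.send({
    topic: "ad-requests",
    messages: [{ key: "req-123", value: JSON.stringify({ ts: Date.now(), slot: "banner" }) }],
  });
  await producer.disconnect();
}

main();
```

One motivation for this shape: Kafka absorbs the bursty ingest so the ad server can stay within its 50 ms response budget while the Cassandra writes happen asynchronously behind it.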

[Figure: ad flow architecture]

Synchronization of Hadoop Tasks

Summary data is also generated daily using the Amazon EMR implementation of Hadoop, with Amazon SWF handling task synchronization. For reporting, recent data that has not yet been summarized is fetched from Cassandra. The workflow below illustrates the implemented flow.
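A hedged sketch of that read path for recent, not-yet-summarized rows; the keyspace, table, and query are assumptions:

```typescript
import { Client } from "cassandra-driver";

const cassandra = new Client({
  contactPoints: ["cassandra-1"],
  localDataCenter: "datacenter1",
  keyspace: "adserver",
});

// Fetch events newer than the last daily-summary cutoff so a report can
// combine MySQL summaries with fresh rows. All names here are hypothetical.
async function recentEvents(sinceMs: number) {
  const result = await cassandra.execute(
    "SELECT ts, campaign_id, impressions FROM ad_events WHERE ts > ? ALLOW FILTERING",
    [sinceMs],
    { prepare: true }
  );
  return result.rows;
}
```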

[Figure: workflow diagram]