Using the lambda-spark-executor library
I have been looking for a way to process a stream of data from Spark into Cassandra without using EMR, and I came across lambda-spark-executor. I read its description in the GitHub repository, but I can't figure out how it is meant to be used. Does it allow us to run Spark on AWS Lambda without installing it on the nodes of a cluster? Will it support connecting to Cassandra? I really want to understand it, but I could not find much documentation. Any help is very much appreciated.
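For reference, here is roughly what I mean by writing from Spark to Cassandra: a minimal sketch using the DataStax spark-cassandra-connector. The keyspace, table, and host names are placeholders, and this only illustrates the kind of job I want to run, not lambda-spark-executor's own API.

```python
# Minimal sketch of the kind of Spark -> Cassandra write I want to run.
# Assumes the DataStax spark-cassandra-connector is on the classpath,
# e.g. --packages com.datastax.spark:spark-cassandra-connector_2.12:3.4.1.
# The host, keyspace, and table names below are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("stream-to-cassandra")
    .config("spark.cassandra.connection.host", "cassandra.example.com")
    .getOrCreate()
)

# Toy DataFrame standing in for the real stream of records.
df = spark.createDataFrame(
    [("user-1", 42), ("user-2", 7)],
    ["user_id", "score"],
)

# Append the rows to an existing Cassandra table.
(
    df.write
    .format("org.apache.spark.sql.cassandra")
    .options(keyspace="my_keyspace", table="my_table")
    .mode("append")
    .save()
)
```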