Full-Time Senior Software Engineer – Billerica
This is a new team.
We are seeking a highly qualified Senior Software Engineer with experience in Java/J2EE, ETL, and real-time/batch streaming technologies. The role requires a proven track record of professional excellence and a strong drive to build great software for our customers. The Senior Software Engineer will work on the i-Ready engineering team and will contribute to the architecture, design, and development of ETLs, reports, real-time data processing, and data preparation on AWS cloud infrastructure within an Agile software development life cycle.
- Build scalable, efficient, high-performance pipelines/workflows capable of processing large amounts of batch and real-time data.
- Build out our data service architecture to support internal and customer-facing application use cases.
- Work across disciplines, supporting real-time streams, ETL pipelines, data warehouses, and reporting services.
- Use Big Data technologies such as Kafka, a data lake on AWS S3, EMR, Spark, Presto, and related tools to store, move, and query data.
- Partner with team members to build and release features using development and CI tools such as Git, Jenkins, and Maven/SBT.
- Follow coding best practices: unit testing, design/code reviews, code coverage, documentation, etc.
- Perform performance analysis and capacity planning for every release.
- Work effectively as part of an agile team.
- Bring new and innovative solutions to the table to resolve challenging software issues as they arise throughout the product lifecycle.
What we’re looking for:
- 10+ years of experience designing and developing enterprise-level software solutions.
- Strong experience with SQL and relational databases.
- Experience working with the Agile/Scrum methodology.
- Experience with large volume data processing and big data tools such as Apache Spark and Presto.
- Experience with Amazon cloud computing infrastructure (MySQL RDS, DynamoDB, AWS pipelines, etc.).
- Knowledge of stream processing technologies such as the Confluent Platform and Spark Streaming.
- Familiarity with Hadoop and the big data ecosystem.
We’d also love to see:
- Knowledge of MemSQL.
- Familiarity with JSON-RPC.
- Educational domain background.
Helpful tips on the role:
- 8-10+ years of experience
- Kafka and/or Spark are key, Scala nice to have – should have 2 out of 3
- SQL is also required
We are looking for developers rather than implementers – these are CODING positions.
That said, candidates MUST show experience with either Kafka or Spark. Scala is a nice-to-have, since much of the existing code is written in Scala and Java. SQL is required, meaning an understanding of query syntax and the ability to write complex queries.
- Amazing fully paid benefits, base compensation, and bonus program.