Full Stack Engineer (Data Science Section): DSAID
- Rakuten, Inc.
- Requisition #: 00013439
Job Description
Rakuten Group has almost 100 million customers in Japan and 1 billion globally, and provides more than 70 services in areas such as e-commerce, payment services, financial services, telecommunications, media, and sports. Following the strategic vision of "Rakuten as a data-driven membership company", we are expanding our data activities across multiple Rakuten Group companies. Our talented and driven team of data scientists and engineers optimizes the membership experience using big data and advanced machine learning.
We are seeking a Full Stack Engineer to work with us on the development and maintenance of our data science solutions and products.
This position requires extensive knowledge of the technologies used in full-stack development, and involves working closely with data scientists to implement various data science products, such as geo and recommendation solutions.
Responsibilities
This position is for a full-stack engineer who supports all data science product development teams. Responsibilities include but are not limited to:
- Design, develop, deploy, and maintain data science applications serving high-throughput, low-latency APIs (see the illustrative sketch below this list)
- Design, build, and maintain the big data platform and batch/streaming data processing pipelines
- Work closely with data scientists on the design and development of new data science solutions and products
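To give a feel for the first responsibility, here is a minimal, hypothetical sketch of a low-latency recommendation lookup endpoint in Flask (one of the frameworks named in the requirements). The route, user IDs, and in-memory store are placeholder assumptions for illustration, not an actual Rakuten API.

```python
# Minimal Flask sketch of a low-latency recommendation lookup API.
# The endpoint path and the in-memory store are hypothetical placeholders;
# in practice the recommendations would come from a model or feature store.
from flask import Flask, jsonify, abort

app = Flask(__name__)

# Hypothetical precomputed recommendations keyed by user ID.
RECOMMENDATIONS = {
    "user-123": ["item-9", "item-42", "item-7"],
}

@app.route("/v1/recommendations/<user_id>", methods=["GET"])
def get_recommendations(user_id: str):
    items = RECOMMENDATIONS.get(user_id)
    if items is None:
        abort(404, description="unknown user")
    return jsonify({"user_id": user_id, "items": items})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```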
Requirements
- 5+ years of hands-on experience in software development
- Experience building solutions for high-traffic websites
- Experience in at least one language for web backend applications and data processing, such as Java, Python, etc.
- Experience with at least one RESTful framework for web backend applications, such as Spring Boot, Flask, etc.
- Experience with HTML, CSS, and JavaScript frameworks such as ReactJS, Angular, etc.
- Experience with databases such as Postgres, MongoDB, etc.
- Familiarity with Docker and Kubernetes
- Familiarity with various OSS and the ability to adopt it into the system
- Practical experience using Big Data technologies (HDFS, Hive, Spark) and Scala, Java, or Python to process large-scale datasets with Hadoop/Spark is a big plus (see the sketch after this list)
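As a rough illustration of the Big Data requirement above, the sketch below shows a minimal PySpark batch job that aggregates hypothetical clickstream logs by user. The HDFS paths, column names, and aggregation logic are assumptions for illustration only.

```python
# Minimal PySpark batch-job sketch: count daily clicks per user.
# Input/output paths and column names are illustrative, not real datasets.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

def main():
    spark = SparkSession.builder.appName("daily-click-aggregation").getOrCreate()

    # Hypothetical partitioned clickstream data on HDFS.
    clicks = spark.read.parquet("hdfs:///data/clickstream/dt=2024-01-01")

    # Aggregate click counts per user for the day.
    daily_counts = (
        clicks
        .groupBy("user_id")
        .agg(F.count("*").alias("click_count"))
    )

    # Write the aggregated result back to HDFS for downstream consumers.
    daily_counts.write.mode("overwrite").parquet(
        "hdfs:///data/agg/daily_clicks/dt=2024-01-01"
    )
    spark.stop()

if __name__ == "__main__":
    main()
```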