About Your Role
You will work alongside experts in various data-related disciplines, such as HPC, machine learning, and cloud computing. Your daily duties include:
- Develop technical architecture that enables analytics and data science, following industry best practices for large-scale processing;
- Design, develop, and implement data pipelines for batch and streaming solutions;
- Research and develop distributed crawlers and data acquisition systems; optimize crawling strategies and improve crawling effectiveness;
- Monitor data quality across the data processing lifecycle;
- Administer cloud-based data infrastructure and databases when necessary.
About Your Background
- Bachelor’s degree with 2–6 years’ intensive experience in data engineering;
- Proficient in at least one of Python, R, or Julia, plus SQL and Linux;
- Proficient in data pipeline development, including batch and streaming processing;
- Familiar with crawling concepts and techniques; the ability to design and develop a crawler system is an advantage;
- Experience with cloud platforms, particularly AWS, is a plus.
What We Offer
- Stay ahead of the market with the latest technologies and knowledge sharing;
- One-on-one mentorship from a Senior Partner;
- A highly competitive compensation package;
- A flat structure where your talent gets noticed and you are promoted quickly;
- Like-minded colleagues who share the values of high motivation, meticulous attention to detail, and systematic thinking.
Data collected will be used for recruitment purposes only. Personal data provided will be handled strictly in accordance with the relevant data protection laws of the Hong Kong Special Administrative Region.