Big Data Developer in Charlotte, North Carolina at AccruePartners

Date Posted: 7/19/2019

Job Description

AccruePartners values our contract and consulting employees. We offer a competitive benefits package to meet the diverse needs of our contract and consulting employees and their family members. Here is what our company offers: 401(k); Medical, Dental, and Vision coverage; Life Insurance; an Employee Assistance Program; Prescription Drug coverage; and Short- and Long-Term Disability Insurance.


About the Company

  • Fortune 100 Financial Services Company
  • 100-year history of dedication to customer satisfaction, success and growth
  • Tremendous growth and new business strategy leading to the need for new talent
  • Significant investments in cutting-edge technology


What They Offer You

  • Culture: Excellent work environment that fosters collaboration
  • Growth: Ability to make an impact on the direction of the organization
  • Opportunity: Gain hands-on experience working with cutting-edge technology
  • Stability: The company's recent financial performance has delivered record profits


Location

  • Charlotte, NC


Responsibilities

  • Build data pipeline frameworks to automate high-volume and real-time data delivery for our Hadoop and streaming data hub
  • Build data APIs and data delivery services that support critical operational and analytical applications for our internal business operations, customers, and partners
  • Transform complex analytical models into scalable, production-ready solutions
  • Continuously integrate and deploy code into cloud environments
  • Develop applications from the ground up using a modern technology stack such as Java/Scala/Python, Spark, NoSQL stores, and Postgres/Snowflake
  • Build robust systems with an eye on the long-term maintenance and support of the application
  • Leverage reusable code modules to solve problems across the team and organization
  • Utilize a working knowledge of multiple development languages
  • Drive cross-team design and development through technical leadership and mentoring
  • Understand complex multi-tier, multi-platform systems
  • Construct data pipeline workflows and schedules using technologies like NiFi, Sqoop, and Flume


Qualifications

  • BS in Computer Science
  • Working on Big Data Projects: 4 years (Required)
  • Spark Development: 3 years (Required)
  • Strong analytical skills and the ability to design an architectural solution using Hadoop for a given use case
  • Design and develop high-throughput, low-latency data processing pipelines
  • Research, evaluate and utilize new technologies, tools and frameworks
  • Experience with Python or Scala
  • Experience in NoSQL/SQL database design, development, and data modeling
  • Proven experience with the Hadoop stack and NoSQL data stores
  • Hands-on experience developing ETL packages using SQL or a scripting/programming language
  • Hands-on experience with Hadoop ecosystem technologies such as HBase, MapReduce, Spark, Pig, Hive, Kafka, Impala, and Oozie
  • Experience with large data sets – regularly transforming and querying large tables
  • Ability to write high-performance, reliable, and maintainable code
  • Good knowledge of database structures, theories, principles, and practices. 
  • Hands-on experience with Impala/Hive
  • Familiarity with data loading tools like Flume, Sqoop.
  • Knowledge of workflow/schedulers like Oozie.
  • Analytical and problem-solving skills, applied to Big Data domain
  • Proven understanding of Apache Hadoop, HBase, Apache NiFi, and Apache Spark
  • Strong grasp of multi-threading and concurrency concepts
  • Solid knowledge of SQL, database structures, principles, and theories
  • Working knowledge of UNIX, including setting up cron jobs
  • Previously worked on a project that followed the Agile methodology
  • Knowledge of Splunk a plus.
  • Prior experience constructing data pipeline workflows and schedules
  • Experience using Cloudera
  • Knowledge of Kafka
  • End-to-end knowledge of Hadoop
  • Good knowledge of back-end programming