Mid-level Cloud Software Engineer (JSON, Ruby, FOSS)
- Develop and maintain assigned analytic tasks, including implementation, documentation, and testing. Analytics will be developed primarily, but not exclusively, in the customer cloud infrastructure. The developer shall possess the skills required to implement an end-to-end solution, including (but not limited to) accessing existing datasets, creating new datasets by ingesting data, performing analytic functions, and exposing analytic results to users.
- Shall have demonstrated work experience with serialization formats such as JSON and/or BSON.
- Shall have demonstrated work experience developing RESTful services and with the Ruby on Rails framework, the LDAP protocol, configuration management, and cluster performance management (e.g., Nagios).
- Shall have demonstrated work experience in the design and development of at least one object-oriented system.
- Shall have demonstrated work experience developing solutions integrating and extending FOSS/COTS products.
- Shall have demonstrated technical writing skills and shall have generated technical documents in support of software development projects.
- In addition, the candidate shall have demonstrated experience (work or college-level courses) in at least two (2) of the desired characteristics.
- Shall have demonstrated work experience with source code management tools (e.g., Git, Stash, or Subversion).
Special Technical Skills Desired:
- Strong Pig and Java skills are required
- Familiarity developing analytics in the QTA (Query Time Analytics) environment
- Knowledge of network encryption metadata and/or hidden services
- Experience with Maven, web services development, and Zoom workflows
- Familiarity with customer corporate tools such as DX
- Ability to work in a team environment
- Experience deploying applications in a cloud environment.
- Understanding of Cloud Scalability.
- Hadoop/Cloud Certification.
- Experience designing and developing automated analytic software, techniques, and algorithms.
- Experience developing and deploying:
  - analytics that include foreign language processing;
  - analytic processes that incorporate/integrate multimedia technologies, including speech and text;
  - analytics that function on massive data sets (for example, more than a billion rows or larger than 10 petabytes);
  - analytics that employ semantic relationships (i.e., inference engines) between structured and unstructured data sets;
  - analytics that identify latent patterns between elements of massive data sets (for example, more than a billion rows or larger than 10 petabytes);
  - analytics that employ techniques commonly associated with Artificial Intelligence.
- Experience with data formats/techniques such as XML (Schema, XSL/T, XQuery), streaming parsers (StAX, SAX), DOM, protobuf, or Avro
- Experience with taxonomy construction for analytic disciplines, knowledge areas and skills.
- Experience developing and deploying data-driven analytics, event-driven analytics, and sets of analytics orchestrated through rules engines.
- Experience with linguistics (grammar, morphology, concepts).
- Experience developing and deploying analytics that discover social networks.
- Experience documenting ontologies, data models, schemas, formats, data element dictionaries, application programming interfaces (APIs), and other technical specifications.
- Experience developing and deploying analytics within a heterogeneous schema environment.
Minimum Experience Required:
- At least eight (8) years of general experience in software development/engineering, including requirements analysis, installation, integration, evaluation, enhancement, maintenance, testing, and problem diagnosis/resolution.
- At least five (5) years of experience in software-intensive projects and programs for government or industry customers.
- At least six (6) years of experience developing software with high-level languages (such as Java, C, or C++), and at least three (3) years developing software in UNIX/Linux (RedHat versions 3-5+) and performing software integration and testing (to include developing and implementing test plans and scripts).
- At least four (4) years of experience with distributed, scalable Big Data stores (NoSQL) such as HBase, CloudBase/Accumulo, BigTable, etc., as well as the MapReduce programming model, the Hadoop Distributed File System (HDFS), and technologies such as Hadoop, Hive, Pig, etc.
- Shall have demonstrated work experience with 1) serialization formats such as JSON and/or BSON, 2) developing RESTful services, and 3) using source code management tools.
- A bachelor’s degree in computer science, engineering, mathematics, or a related discipline may be substituted for four (4) years of general experience. A master’s degree is a plus.
- TS/SCI with Polygraph Required