Serving the intersection of cloud, big data, and natural language processing.
Below is a selection of the custom solutions we have built for clients. You can also browse the products we have created for everyone.
Technical Document Classification
This project classified technical documents to predict each document’s movement through a workflow. A prediction of a document’s path through the workflow can help technical writers create more effective documents. The solution used natural language processing to pre-process the documents, train a neural-network document classifier, and evaluate the classifier’s performance on new data. The resulting model was exposed as a web service that could be integrated with the client’s existing systems to give authors quick feedback.
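A workflow-outcome classifier of this kind can be sketched in a few lines. The snippet below is a minimal illustration, not the delivered model: the sample documents, the labels, and the scikit-learn TF-IDF + MLP pipeline are all assumptions standing in for the client’s data and network.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Hypothetical training data: documents labeled with the workflow path they took.
docs = [
    "Install the bracket using the supplied torque wrench.",
    "Error codes E101-E110 indicate a sensor fault.",
    "Submit form 7A to the review board before publication.",
    "Attach the housing and verify the seal visually.",
]
labels = ["approved", "revision", "review", "approved"]

# TF-IDF features feeding a small neural-network classifier.
model = make_pipeline(
    TfidfVectorizer(),
    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0),
)
model.fit(docs, labels)

# Classify a new document to predict its likely workflow path.
prediction = model.predict(["Verify the seal and install the bracket."])
```

In the real engagement the fitted pipeline was wrapped in a web service; any HTTP framework that can load the model and call `predict()` would serve that role.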
NLP Pipeline for Entity Ingestion
This project required extracting named entities from large volumes of text files. Source data was ingested through Apache Kafka and pulled into a processing pipeline by Apache NiFi. NLP operations extracted named entities, performed entity deduplication and enrichment, and persisted the entities to a database. The project allowed the client to connect many distributed data sources and pull insights out of their data. The solution ran in AWS, was defined entirely as infrastructure-as-code, and was deployed automatically.
#nlp #bigdata #aws
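The NLP stage of such a pipeline — extract mentions, then deduplicate them — can be sketched as below. The regex “extractor” is a deliberately naive stand-in for a real NER model, and the lowercase normalization key is an assumption; the Kafka/NiFi ingestion and database persistence are out of scope here.

```python
import re
from collections import defaultdict

def extract_entities(text: str) -> list[str]:
    """Naive stand-in for a real NER model: runs of capitalized words."""
    return re.findall(r"[A-Z][a-z]+(?: [A-Z][a-z]+)*", text)

def deduplicate(mentions: list[str]) -> dict[str, set[str]]:
    """Merge mentions that normalize to the same key, keeping every surface form."""
    merged = defaultdict(set)
    for mention in mentions:
        merged[mention.lower()].add(mention)
    return dict(merged)

# Collect mentions across documents, then merge duplicates into one entity each.
mentions = []
for doc in ["Acme Corp acquired Widget Works.", "Analysts praised Acme Corp."]:
    mentions += extract_entities(doc)
entities = deduplicate(mentions)
```

In production, the extractor would be a trained NER model and the normalization key would typically include entity type and fuzzier matching, but the extract-then-merge shape stays the same.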
CI/CD Pipeline in AWS
This project centered on building a CI/CD pipeline in AWS, which allowed the client to deploy to their environments faster. Using tools such as Jenkins, SonarQube, and Nexus OSS, we helped the client build a pipeline to build, test, analyze, and publish binaries and containers ready for deployment. Integrated notifications and alerting enabled faster responses to problems found. The pipeline itself was defined as infrastructure-as-code, making it repeatable.
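The fail-fast behavior described above — run the stages in order, stop at the first failure, and send an alert — can be sketched independently of any particular CI tool. The stage names and the `runner`/`notify` callbacks below are illustrative stand-ins for the actual Jenkins stages and alerting integration.

```python
# Illustrative stage order; the real pipeline's stages lived in Jenkins.
STAGES = ["build", "test", "analyze", "publish"]

def run_pipeline(stages, runner, notify):
    """Run stages in order; stop at the first failure and send an alert."""
    completed = []
    for name in stages:
        if runner(name):       # runner returns True on stage success
            completed.append(name)
        else:
            notify(f"stage '{name}' failed")
            break
    return completed

# Demo: simulate a failure in the 'analyze' stage (e.g. a SonarQube quality gate).
alerts = []
completed = run_pipeline(STAGES, lambda stage: stage != "analyze", alerts.append)
```

Stopping at the first failed stage keeps feedback fast: a broken build never reaches analysis or publishing, and the alert points directly at the failing stage.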