Apache NiFi for Processing PHI Data

With the recent release of Apache NiFi 1.10.0, it seems like a good time to discuss using Apache NiFi with data containing protected health information (PHI). When PHI is present in your data, regulations such as HIPAA can impose significant requirements that you may not otherwise face.

Apache NiFi probably needs little introduction, but in case you are new to it, Apache NiFi is a big-data ETL application that uses directed graphs called data flows to move and transform data. You can think of it as taking data from one place to another while, optionally, doing some transformation along the way. The data moves through the flow in a construct known as a flow file. In this post we’ll consider a simple data flow that reads files from a remote SFTP server and uploads them to S3. We don’t need a complex data flow to understand how PHI can impact our setup.

Encryption of Data at Rest and in Motion

Two core things to address when PHI data is present are encryption of the data at rest and encryption of the data in motion. The first step is to identify the places where sensitive data will be at rest and in motion.

For encryption of data at rest, the first location is the remote SFTP server. In this example, let’s assume the remote SFTP server is not managed by us, has the appropriate safeguards, and is someone else’s responsibility. As the data goes through the NiFi flow, the next place the data is at rest is inside NiFi’s provenance repository. (The provenance repository stores the history of all flow files that pass through the data flow.) NiFi then uploads the files to S3. AWS gives us the capability to encrypt S3 bucket contents by default, so we will enable default encryption on the bucket (and can add a bucket policy that rejects unencrypted uploads).
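
As a rough sketch (the bucket name is a placeholder, and you may prefer a customer-managed KMS key), default encryption can be turned on for the bucket with the AWS CLI:

aws s3api put-bucket-encryption \
  --bucket my-phi-bucket \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"aws:kms"}}]}'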

For encryption of data in motion, we have the connection between the SFTP server and NiFi and between NiFi and S3. Since we are using an SFTP server, our communication to the SFTP server will be encrypted. Similarly, we will access S3 over HTTPS providing encryption there as well.

If we are using a multi-node NiFi cluster, we also have the communication between the NiFi nodes in the cluster. If the flow only executes on a single node, you might argue that encryption between the nodes is not necessary. However, what happens in the future when the flow’s behavior changes and PHI is suddenly transmitted in plain text across the network? For that reason, it’s best to set up encryption between NiFi nodes from the start. This is covered in the NiFi System Administrator’s Guide.
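
The exact steps are in that guide, but for reference, the relevant nifi.properties entries on each node look roughly like the following (keystore and truststore paths, types, and passwords are placeholders for your own certificates):

nifi.cluster.protocol.is.secure=true
nifi.remote.input.secure=true
nifi.security.keystore=/opt/nifi/certs/keystore.jks
nifi.security.keystoreType=jks
nifi.security.keystorePasswd=changeit
nifi.security.truststore=/opt/nifi/certs/truststore.jks
nifi.security.truststoreType=jks
nifi.security.truststorePasswd=changeit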

Encrypting Apache NiFi’s Data at Rest

The best way to ensure encryption of data at rest is to use full disk encryption for the NiFi instances. (If you are on AWS and running NiFi on EC2 instances, use an encrypted EBS volume.) This ensures that all data persisted on the system will be encrypted no matter where the data appears. If a NiFi processor decides to have a bad day and dump error data to the log there is a risk of PHI data being included in the log. With full disk encryption we can be sure that even that data is encrypted as well.
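
As a sketch (the availability zone, size, and volume type are placeholders), an encrypted volume can be created with the AWS CLI, or EBS encryption can be enabled by default for the account and region:

aws ec2 create-volume --availability-zone us-east-1a --size 500 --volume-type gp2 --encrypted
aws ec2 enable-ebs-encryption-by-default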

Looking at Other Methods

Let’s recap the NiFi repositories:

FlowFile repository
Content repository
Provenance repository

PHI could exist in any of these repositories when PHI data is passing through a NiFi flow. NiFi does have an encrypted provenance repository implementation and NiFi 1.10.0 introduces an experimental encrypted content repository but there are some caveats. (Currently, NiFi does not have an implementation of an encrypted flowfile repository.)

When using these encryption implementations, spillage of PHI onto the file system through a log file or some other means remains a risk, and there is a bit of overhead from the additional CPU instructions needed to perform the encryption. With an encrypted EBS volume, by contrast, we don’t have to worry about spilling unencrypted PHI to the disk, and per the AWS EBS encryption documentation, “You can expect the same IOPS performance on encrypted volumes as on unencrypted volumes, with a minimal effect on latency.”

There is also the NiFi EncryptContent processor that can encrypt (and decrypt, despite the name!) the content of flow files. This processor has its uses, but only in very specific cases. Trying to encrypt data at the level of the data flow for compliance reasons is not recommended because the data may still exist unencrypted elsewhere in the NiFi repositories.

Removing PHI from Text in a NiFi Flow

What if you want to remove PHI (and PII) from the content of flow files as they go through a NiFi data flow? Check out our product Philter. It can find and remove many types of PHI and PII from natural language, unstructured text from within a NiFi flow. Text containing PHI is sent to Philter, and Philter responds with the same text but with the PHI and PII removed.

Conclusion

Full disk encryption and encrypting all connections in the NiFi flow and between NiFi nodes provide encryption of data at rest and in motion. It’s also recommended that you check with your organization’s compliance officer to determine whether there are any other requirements imposed by your organization or other relevant regulations prior to deployment. It’s best to gather that information up front to avoid rework in the future!

Need more help?

We provide consulting services around AWS and big-data tools like Apache NiFi. Get in touch by sending us a message. We look forward to hearing from you!

Some First Steps for a New NiFi Cluster

After installing Apache NiFi there are a few steps you might want to take before making your cluster available for prime time. None of these steps are required so make sure they are appropriate for your use-case before implementing them.

Lowering NiFi’s Log File Retention Properties

By default, Apache NiFi’s nifi-app.log files are capped at 100 MB per log file and NiFi retains 30 log files. If the maximum is reached, that comes out to 3 GB of disk space from nifi-app.log files. That’s not a whole lot, but in some cases you may need the extra disk space. Or, if an external service is already managing your NiFi log files, you don’t need them hanging around any longer than necessary. To lower the thresholds, open NiFi’s conf/logback.xml file. Under the appender configuration for nifi-app.log you will see maxFileSize and maxHistory values. Lower these values, save the file, and restart NiFi to save disk space on log files. Conversely, if you want to keep more log files, just increase those limits.
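
For reference, the relevant portion of conf/logback.xml looks something like the trimmed excerpt below (the surrounding elements vary between NiFi versions); dropping the values to, say, 50 MB and 10 files caps nifi-app.log usage at roughly 500 MB:

<appender name="APP_FILE" class="ch.qos.logback.core.rolling.RollingFileAppender">
    <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
        <maxFileSize>50MB</maxFileSize>
        <maxHistory>10</maxHistory>
    </rollingPolicy>
</appender>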

You can make changes to the handling of the nifi-user.log and nifi-bootstrap.log files here, too. Those files typically don’t grow as fast as nifi-app.log, so they can often be left as-is. Note that in a cluster you will need to make these changes on each node.

Install NiFi as a Service

Having NiFi run as a service allows it to automatically start when the system starts and provides easier access for starting and stopping NiFi. Note that in a cluster you will need to make these changes on each node. To install NiFi as a system service, go to NiFi’s bin/ directory and run the following commands (on Ubuntu):

sudo ./nifi.sh install
sudo update-rc.d nifi defaults

You can now control the NiFi service with the commands:

sudo systemctl status nifi
sudo systemctl start nifi
sudo systemctl stop nifi
sudo systemctl restart nifi

If running NiFi in a container, add the install commands to your Dockerfile.

Install (and use!) the NiFi Registry

The NiFi Registry provides the ability to put your flows under source control. It has quickly become an invaluable tool for NiFi. The NiFi Registry should be installed outside of your cluster but accessible to your cluster. The NiFi Registry Documentation contains instructions on how to install it, create buckets, and connect it to your NiFi cluster.

By default the NiFi Registry listens on port 18080, so be sure your firewall rules allow that communication. Remember, you only need a single installation of the NiFi Registry per NiFi cluster. If you are using infrastructure-as-code to deploy your NiFi cluster, keep the scripts that deploy the NiFi Registry separate from the cluster scripts. You don’t want the NiFi Registry’s lifecycle to be tied to the NiFi cluster’s lifecycle. This allows you to create and tear down NiFi clusters without affecting your NiFi Registry. It also allows you to share your NiFi Registry between multiple clusters if you need to.
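
As a minimal sketch (the version number and the ufw firewall are assumptions; adjust for your environment), standing up a standalone Registry and opening its port looks like this:

tar xzf nifi-registry-0.5.0-bin.tar.gz
nifi-registry-0.5.0/bin/nifi-registry.sh start
sudo ufw allow 18080/tcp

Then, in NiFi’s Controller Settings, add a Registry Client pointing at http://your-registry-host:18080 so your flows can be placed under version control.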

Although using the NiFi Registry is not required to build a data flow in NiFi, your life will be much, much easier if you do use it, especially in an environment where multiple users will be manipulating the data flow.

Need more help?

We provide consulting services around AWS and big-data tools like Apache NiFi. Get in touch by sending us a message. We look forward to hearing from you!

Monitoring Apache NiFi with Datadog

One of the most common requirements when using Apache NiFi is a means to adequately monitor the NiFi cluster. Insights into a NiFi cluster’s use of memory, disk space, CPU, and NiFi-level metrics are crucial to operating and optimizing data flows. NiFi’s Reporting Tasks provide the capability to publish metrics to external services.

Datadog is a hosted service for collecting, visualizing, and alerting on metrics. With Apache NiFi’s built-in DataDogReportingTask, we can leverage Datadog to monitor our NiFi instances. In this blog post we are running a three-node NiFi cluster in Amazon EC2, with each node on its own EC2 instance.

Note that the intention of this blog post is not to promote Datadog but instead to demonstrate one potential platform for monitoring Apache NiFi.

If you don’t already have a Datadog account, the first thing to do is create one. Once that’s done, install the Datadog Agent on your NiFi hosts. The command will look similar to the following except it will contain your API key; in the command below we are installing the Datadog Agent on an Ubuntu instance. If you only want to monitor NiFi-level metrics you can skip this step; however, we find the host-level metrics to be valuable as well.

DD_API_KEY=xxxxxxxxxxxxxxx bash -c "$(curl -L https://raw.githubusercontent.com/DataDog/datadog-agent/master/cmd/agent/install_script.sh)"

This command downloads and installs the Datadog Agent on the system. The Datadog Agent service is started automatically and will run at system startup.
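
Before moving on you can confirm the agent is running (Ubuntu with systemd assumed):

sudo systemctl status datadog-agent
sudo datadog-agent status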

Creating a Reporting Task

The next step is to create a DataDogReportingTask in NiFi. In NiFi’s Controller Settings, under Reporting Tasks, click to add a new Reporting Task and select Datadog. In the Reporting Task’s settings, enter your Datadog API key and change the other values as desired.

By default, NiFi Reporting Tasks run every 5 minutes. You can change this period under the Settings tab’s “Run Schedule” if needed. Click Apply to save the reporting task.

The reporting task will now be listed. Click the Play icon to start the reporting task. Apache NiFi will now send metrics to Datadog every 5 minutes (unless you changed the Run Schedule value to a different interval).

Exploring NiFi Metrics in Datadog

We can now go to Datadog and explore the metrics from NiFi. Open the Metrics Explorer and enter “nifi” (without the quotes) and the available NiFi metrics will be displayed. These metrics can be included in graphs and other visuals in Datadog dashboards. (If you’re interested, the names of the metrics originate in MetricNames.java.)

Creating a Datadog Dashboard for Apache NiFi

These metrics can be added to Datadog dashboards. By creating a new dashboard, we can add NiFi metrics to it. For example, in the dashboard shown below we added line graphs to show the CPU usage and JVM heap usage for each of the NiFi nodes.

The DataDogReportingTask provides a convenient but powerful method of publishing Apache NiFi metrics. The Datadog dashboards can be configured to provide a comprehensive look into the performance of your Apache NiFi cluster.

What we have shown here is really the tip of the iceberg for making a comprehensive monitoring dashboard. With NiFi’s metrics and Datadog’s flexibility, how the dashboard is created is completely up to you and your needs.

Need more help?

We provide consulting services around AWS and big-data tools like Apache NiFi. Get in touch by sending us a message. We look forward to hearing from you!

Dataworks Summit 2019

Recently (in May 2019) I had the honor of attending and speaking at the Dataworks Summit in Washington, D.C. The conference had many interesting topics and keynote speakers focused on big-data technologies and business applications. I also always enjoy exploring downtown Washington, D.C. Whether it is the “hike” across the National Mall taking in the sights or visiting all of the nearby shops, there’s always something new to see.

One thing that caught my attention early on was the number of talks that either focused largely on Apache NiFi or at least mentioned Apache NiFi as a component of a larger data ingest platform. Apache NiFi has definitely cemented itself squarely as a core piece of data flow orchestration.

My talk was one of them. It described a process to ingest natural language text, process it, and persist the extracted entities to a database, with Apache NiFi as the workhorse that drove the whole process. Without NiFi, I would have had to write a lot more code and probably ended up with a much less elegant and performant solution.

In conclusion, if you have not yet looked at Apache NiFi for your data ingest and transformation (think ETL) pipeline needs, do yourself a favor and spend a few minutes with NiFi. I think you will like what you find. And if you need help along the way, drop me a note. Just say, “Hey Jeff, I need a pipeline to do X. Show me how it can be done with NiFi.” I’ll be glad to help.

Need more help?

We provide consulting services around AWS and big-data tools like Apache NiFi. Get in touch by sending us a message. We look forward to hearing from you!

Entity Extraction from Natural Language Text in a Data Flow Pipeline

This brief slide show illustrates using Idyl E3 Entity Extraction Engine with Apache NiFi to extract named-entities from text in a data pipeline.

Need more help?

We provide consulting services around AWS and big-data tools like Apache NiFi. Get in touch by sending us a message. We look forward to hearing from you!

Introducing NLP Flow

Today we are introducing NLP Flow, a collection of processors for the popular Apache NiFi data platform to support NLP pipeline data flows.

Apache NiFi is a cross-platform tool for creating and managing data flows. With Apache NiFi you can create flows to ingest data from a multitude of sources, perform transformations and logic on the data, and interface with external systems. Apache NiFi is a stable and proven platform used by companies worldwide.

Extending Apache NiFi to support NLP pipelines is a perfect fit. NLP Flow is, in Apache NiFi terminology, a set of processors that facilitate NLP tasks via our NLP Building Blocks. With NLP Flow, you can create powerful NLP pipelines inside of Apache NiFi to perform language identification, sentence extraction, text tokenization, and named-entity extraction. For example, an NLP pipeline to ingest text from HDFS, extract all named-person entities for English and Spanish text, and persist the entities to a MongoDB database can be managed and executed within Apache NiFi.

NLP Flow is free for everyone to use. An existing Apache NiFi (a free download) installation is required.

Using the NLP Building Blocks with Apache NiFi to Perform Named-Entity Extraction on Logical Entity Exchange Specifications (LEXS) Documents

In this post we are going to show how our NLP Building Blocks can be used with Apache NiFi to create an NLP pipeline to perform named-entity extraction on Logical Entity Exchange Specifications (LEXS) documents. The pipeline will extract a natural language field from each document, identify the named-entities in the text through a process of sentence extraction, tokenization, and named-entity recognition, and persist the entities to a MongoDB database.  While the pipeline we are going to create uses data files in a specific format, the pipeline could be easily modified to read documents in a different format.

LEXS is an XML, NIEM-based framework for information exchange developed for the US Department of Justice. While the details of LEXS are out of scope for this post, the key points are that it is XML-based, a mix of structured and unstructured text, and used to describe various law enforcement events. We have taken the LEXS specification and created test documents for this pipeline. Example documents are also available on the public internet.

And just in case you are not familiar with Apache NiFi, it is a free (Apache-licensed), cross-platform application that allows the creation and execution of data flow processes. With Apache NiFi you can move data through pipelines while applying transformations and executing actions.

The completed Apache NiFi data flow is shown below.

NLP Building Blocks

This post requires that our NLP Building Blocks are running and accessible. The NLP Building Blocks are microservices to perform NLP tasks. They are:

Renku Language Detection Engine
Prose Sentence Extraction Engine
Sonnet Tokenization Engine
Idyl E3 Entity Extraction Engine

Each is available as a Docker container and on the AWS and Azure marketplaces. You can quickly start each building block as a Docker container using docker-compose or individually:

Start Prose Sentence Extraction Engine:

docker run -p 8060:8060 -it mtnfog/prose:1.1.0

Start Sonnet Tokenization Engine:

docker run -p 9040:9040 -it mtnfog/sonnet:1.1.0

Start Idyl E3 Entity Extraction Engine:

docker run -p 9000:9000 -it mtnfog/idyl-e3:3.0.0

With the containers running we will next set up Apache NiFi.

Setting Up

To begin, download Apache NiFi and unzip it. Now we can start Apache NiFi:

apache-nifi-1.5.0/bin/nifi.sh start

We can now begin creating our data flow.

Creating the Ingest Data Flow

The Process

Our data flow in Apache NiFi will follow these steps. Each step is described in detail below.

  1. Ingest LEXS XML files from the file system. Apache NiFi offers the ability to read files from many sources (such as HDFS and S3) but we will simply use the local file system as our source.
  2. Execute an XPath query against each LEXS XML file to extract the narrative from each record. The narrative is a free text, natural language description of the event described by the LEXS XML file.
  3. Use Prose Sentence Extraction Engine to identify the individual sentences in the narrative.
  4. Use Sonnet Tokenization Engine to break each sentence into its individual tokens (typically words).
  5. Use Idyl E3 Entity Extraction Engine to identify the named-person entities in the tokens.
  6. Persist the extracted entities into a MongoDB database.

Configuring the Apache NiFi Processors

Ingesting the XML Files

To read the documents from the file system we will use the GetFile processor. The only configuration property for this processor that we will set is the input directory. Our documents are stored in /docs so that will be our source directory. Note that, by default, the GetFile processor removes the files from the directory as they are processed.

Extracting the Narrative from Each Record

The GetFile processor will send the file’s XML content to an EvaluateXPath processor. This processor will execute an XPath query against each XML document to extract the document’s narrative. The extracted narrative will be stored in the content of the flowfile. The XPath is:

/*[local-name()='doPublish']/*[local-name()='PublishMessageContainer']/*[local-name()='PublishMessage']/*[local-name()='DataItemPackage']/*[local-name()='Narrative']
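
If you want to sanity-check the XPath outside of NiFi first, xmllint (assumed to be installed, with a sample LEXS file named sample.xml) can evaluate the same expression:

xmllint --xpath "/*[local-name()='doPublish']/*[local-name()='PublishMessageContainer']/*[local-name()='PublishMessage']/*[local-name()='DataItemPackage']/*[local-name()='Narrative']" sample.xml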

Identifying Individual Sentences in the Narrative

The flowfile will now be sent to an InvokeHTTP processor that will send the sentence extraction request to Prose Sentence Extraction Engine. We set the following properties on the processor:

HTTP Method: POST
Remote URL: http://localhost:8060/api/sentences
Content Type: text/plain

The response from Prose Sentence Extraction Engine will be a JSON array containing the individual sentences in the narrative.
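
Under the hood the InvokeHTTP processor is just making an HTTP POST, so you can exercise the service by hand with curl (sample text only; the exact response format is whatever your version of Prose returns):

curl -X POST -H "Content-Type: text/plain" \
  --data "George Washington was president. This is another sentence." \
  http://localhost:8060/api/sentences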

Splitting the Sentences Array into Separate FlowFiles

The array of sentences will be sent to a SplitJSON processor. This processor splits the flowfile creating a new flowfile for each sentence in the array. For the remainder of the data flow, the sentences will be operated on individually.

Identifying the Tokens in Each Sentence

Each sentence is next sent to an InvokeHTTP processor that will call Sonnet Tokenization Engine. The properties set for this processor are:

HTTP Method: POST
Remote URL: http://localhost:9040/api/tokenize
Content Type: text/plain

The response from Sonnet Tokenization Engine will be an array of tokens (typically words) in the sentence.

Extracting Named-Entities from the Tokens

The array of tokens is next sent to an InvokeHTTP processor that sends the tokens to Idyl E3 Entity Extraction Engine for named-entity extraction. The properties to set for this processor are:

HTTP Method: POST
Remote URL: http://localhost:9000/api/extract
Content Type: application/json

Idyl E3 analyzes the tokens and identifies which tokens are named-person entities (like John Doe, Jerry Smith, etc.). The response is a list of the entities found along with metadata about each entity. This metadata includes the entity’s confidence value, a value from 0 to 1 that indicates Idyl E3’s confidence that the extracted text is actually an entity.

Storing Entities in MongoDB

The entities having a confidence value greater than or equal to 0.6 will be persisted to a MongoDB database. In this processor, each entity will be written to the database for storage and further analysis by other systems. The properties to configure for the PutMongo processor are:

Mongo URI: mongodb://localhost:27017
Mongo Database Name: <Any database>
Mongo Collection Name: <Any collection>

You could just as easily insert the entities into a relational database, Elasticsearch, or another repository.

Pipeline Summary

That is our pipeline! We went from XML documents, did some natural language processing via the NLP Building Blocks, and ended up with named-entities stored in MongoDB.

Production Deployment

There are a few things you may want to change for a production deployment.

Multiple Instances of Apache NiFi

First, you will likely want (and need) more than one instance of Apache NiFi to handle large volumes of files.

High Availability of NLP Building Blocks

Second, in this post we ran the NLP Building Blocks as local Docker containers. This is great for a demonstration or proof-of-concept, but you will want high availability for these services through a platform like Kubernetes or AWS ECS.

You can also launch the NLP Building Blocks as EC2 instances via the AWS Marketplace. You could then plug the AMI of each building block into an EC2 autoscaling group behind an Elastic Load Balancer. This provides instance health checks and the ability to scale up and down in response to demand. They are also available on the Azure Marketplace.

Incorporate Language Detection in the Data Flow

Third, you may have noticed that we did not use Renku Language Detection Engine. This is because we knew beforehand that all of our documents are English. If you are unsure, you can insert a Renku Language Detection Engine processor in the data flow immediately after the EvaluateXPath processor to determine the text’s language and use the result as a query parameter to the other NLP Building Blocks.

Improve Performance through Custom Models

Lastly, we did not use any custom sentence, tokenization, or entity models. Each NLP Building Block includes basic functionality to perform these actions without custom models, but using custom models will almost certainly provide a much higher level of performance because they more closely match your data than the default models do. The tools to create and evaluate custom models are included with each application; refer to each application’s documentation for the necessary steps.

Filtering Entities with Low Confidence

You may want to filter out entities having a low confidence value in order to control noise. The optimal threshold depends on a combination of your data, the entity model being used, and how much noise your system can tolerate. In some use-cases it may be better to use a lower threshold out of caution. Each entity has an associated confidence value that can be used for filtering.
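
If you want to experiment with thresholds on a saved Idyl E3 response before changing the flow, a quick jq one-liner works (jq assumed installed; the 0.6 threshold and the file name are just examples):

jq '{entities: [.entities[] | select(.confidence >= 0.6)]}' idyl-e3-response.json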

Need Help?

Get in touch. We’ll be glad to help out. Send us a line at support at mtnfog.com.

Apache OpenNLP Language Detection in Apache NiFi

When making an NLP pipeline in Apache NiFi it can be a requirement to route the text through the pipeline based on the language of the text. But how do we get the language of the text inside our pipeline? This blog post introduces a processor for Apache NiFi that utilizes Apache OpenNLP’s language detection capabilities. This processor receives natural language text and returns an ordered list of detected languages along with each language’s probability. Your pipeline can get the first language in the list (it has the highest probability) and use it to route your text through your pipeline.

In case you are not familiar with OpenNLP’s language detection, it provides the ability to detect over 100 languages. It works best with text containing more than one sentence (the more text the better). It was introduced in OpenNLP 1.8.3.

To use the processor, first clone it from GitHub. Then build it and copy the nar file to your NiFi’s lib directory (and restart NiFi if it was running). We are using NiFi 1.4.0.

git clone https://github.com/mtnfog/nlp-nifi-processors.git
cd nlp-nifi-processors
mvn clean install
cp langdetect-nifi-processor/langdetect-processor-nar/target/*.nar /path/to/nifi/lib/

The processor does not have any settings to configure. It’s ready to work right “out of the box.” You can add the processor to your NiFi canvas:

You will likely want to connect the processor to an EvaluateJsonPath processor to extract the language from the JSON response and then to a RouteOnAttribute processor to route the text through the pipeline based on the language. This processor will also work with Apache NiFi MiNiFi to determine the language of text on edge devices. MiNiFi is a subproject of Apache NiFi that allows for capturing data into NiFi flows from edge locations.

Backing up a bit, why would we need to route text through the pipeline depending on its language? The actions taken further down in the pipeline are likely to be language dependent. For instance, the next step might be to tokenize the text, but knowing how to tokenize it requires knowing what language it is. Or, if the next step is to send the text to an entity extraction process, we need to know which entity model to use based on the language. So, language detection in an NLP pipeline can be a crucial initial step. A previous blog post showed how to use NiFi for an NLP pipeline, and extending it with language detection would be a great addition!

This processor performs the language detection inside the NiFi process; everything remains inside your NiFi installation. This should be adequate for a lot of use-cases, but if you need more throughput check out Renku Language Detection Engine. It works very similarly to this processor in that it receives text and returns a list of identified languages. However, Renku is implemented as a stateless, scalable microservice, meaning you can deploy as many instances as you need to meet your use-case’s requirements. And maybe the best part is that Renku is free for everyone to use without any limits.

Let us know how the processor works out for you!

Orchestrating NLP Building Blocks with Apache NiFi for Named-Entity Extraction

This blog post shows how we can create an NLP pipeline to perform named-entity extraction on natural language text using our NLP Building Blocks and Apache NiFi. Our NLP Building Blocks provide the ability to perform sentence extraction, string tokenization, and named-entity extraction. They are implemented as microservices and can be deployed almost anywhere, such as AWS, Azure, and as Docker containers.

At the completion of this blog post we will have a system that reads natural language text stored in files on the file system, pulls out the sentences of each file, finds the tokens in each sentence, and finds the named-entities in the tokens.

Apache NiFi is an open-source application that provides data flow capabilities. Using NiFi you can visually define how data should flow through your system. Using what NiFi calls “processors”, you can ingest data from many data sources, perform operations on the data such as transformations and aggregations, and then output the data to an external system. We will be using NiFi to facilitate the flow of text through our NLP pipeline. The text will be read from plain text files on the file system. We will then:

  • Identify the sentences in input text.
  • For each sentence, extract the tokens in the sentence.
  • Process the tokens for named-entities.

To get started we will stand up the NLP Building Blocks. This consists of the following applications:

Renku Language Detection Engine
Prose Sentence Extraction Engine
Sonnet Tokenization Engine
Idyl E3 Entity Extraction Engine

We will launch these applications using a docker-compose script.

git clone https://github.com/mtnfog/nlp-building-blocks
cd nlp-building-blocks
docker-compose up

This will pull the docker images from DockerHub and run the containers. We now have each NLP building block up and running. Let’s get Apache NiFi up and running, too.

To get started with Apache NiFi we will download it. It is a big download at just over 1 GB. You can download it from the Apache NiFi Downloads page or directly from a mirror; we are using NiFi 1.4.0. Once the download is done we will unzip it and start NiFi:

unzip nifi-1.4.0-bin.zip
cd nifi-1.4.0/bin
./nifi.sh start

NiFi will start and after a few minutes it will be available at http://localhost:8080/nifi. (If you are curious you can see the NiFi log under logs/nifi-app.log.) Open your browser to that page and you will see the NiFi canvas as shown below. We can now design our data flow around the NLP Building Blocks!

If you want to skip to the meat and potatoes you can get the NiFi template described below in the nlp-building-blocks repository.

Our source data is going to be read from text files on our computer stored under /tmp/in/. We will use NiFi’s GetFile processor to read the file. Add a GetFile processor to the canvas:


Right-click the GetFile processor and click Configure to bring up the processor’s properties. The only property we are going to set is the Input Directory property. Set it to /tmp/in/ and click Apply:

We will use the InvokeHTTP processor to send API requests to the NLP Building Blocks, so, add a new InvokeHTTP processor to the canvas:

This first InvokeHTTP processor will be used to send the data to Prose Sentence Extraction Engine to extract the sentences in the text. Open the InvokeHTTP processor’s properties and set the following values:

  • HTTP Method – POST
  • Remote URL – http://localhost:7070/api/sentences
  • Content Type – text/plain

Set the processor to autoterminate for everything except Response. We also set the processor’s name to ProseSentenceExtractionEngine. Since we will be using multiple InvokeHTTP processors this lets us easily differentiate between them. We can now create a connection between the GetFile and InvokeHTTP processors by clicking and drawing a line between them. Our flow right now reads files from the filesystem and sends the contents to Prose:

The sentences returned from Prose will be in a JSON array. We can split this array into individual FlowFiles with the SplitJson processor. Add a SplitJson processor to the canvas and set its JsonPath Expression property to $.* as shown below:

Connect the SplitJson processor to the ProseSentenceExtractionEngine processor for the Response relationship. The canvas should now look like this:

Now that we have the individual sentences in the text we can send those sentences to Sonnet Tokenization Engine to tokenize the sentences. Similar to before, add an InvokeHTTP processor and name it SonnetTokenizationEngine. Set its method to POST, the Remote URL to http://localhost:9040/api/tokenize, and the Content-Type to text/plain. Automatically terminate every relationship except Response. Connect it to the SplitJson processor using the Split relationship. The result of this processor will be an array of tokens from the input sentence.

While we are at it, let’s go ahead and add an InvokeHTTP processor for Idyl E3 Entity Extraction Engine. Add the processor to the canvas and set its name to IdylE3EntityExtractionEngine. Set its properties:

  • HTTP Method – POST
  • Remote URL – http://localhost:9000/api/extract
  • Content-Type – application/json

Connect the IdylE3EntityExtractionEngine processor to the SonnetTokenizationEngine processor via the Response relationship. All other relationships can be set to autoterminate. To make things easier to see, we are going to add an UpdateAttribute processor that sets the filename for each FlowFile to a random UUID. Add an UpdateAttribute processor and add a new property called filename with the value ${uuid}.txt. We will also add a processor to write the FlowFiles to disk so we can see what happened during the flow’s execution. Add a PutFile processor and set its Directory property to /tmp/out/.

Our finished flow looks like this:

To test our flow we are going to use a super simple text file. The full contents of the text file are:

George Washington was president. This is another sentence. Martha Washington was first lady.

Save this file as /tmp/in/test.txt.

Now, start up the NLP Building Blocks:

git clone https://github.com/mtnfog/nlp-building-blocks
cd nlp-building-blocks
docker-compose up

Now you can start the processors in the flow! The file /tmp/in/test.txt will disappear and three files will appear in /tmp/out/. The three files will have random UUIDs for filenames thanks to the UpdateAttribute processor. If we look at the contents of each of these files we see:

First file:

{"entities":[{"text":"George Washington","confidence":0.96,"span":{"tokenStart":0,"tokenEnd":2},"type":"person","languageCode":"eng","extractionDate":1514488188929,"metadata":{"x-model-filename":"mtnfog-en-person.bin"}}],"extractionTime":84}

Second file:

{"entities":[],"extractionTime":7}

Third file:

{"entities":[{"text":"Martha Washington","confidence":0.89,"span":{"tokenStart":0,"tokenEnd":2},"type":"person","languageCode":"eng","extractionDate":1514488189026,"metadata":{"x-model-filename":"mtnfog-en-person.bin"}}],"extractionTime":2}

The input text was broken into three sentences so we have three output files. In the first file we see that George Washington was extracted as a person entity. The second file did not have any entities. The third file had Martha Washington as a person entity. Our NLP pipeline orchestrated by Apache NiFi read the input, broke it into sentences, broke each sentence into tokens, and then identified named-entities from the tokens.

This flow assumed the language would always be English but if you are unsure you can add another InvokeHTTP processor to utilize Renku Language Detection Engine. This will enable language detection inside your flow and you can route the FlowFiles through the flow based on the detected language giving you a very powerful NLP pipeline.

There’s a lot of cool stuff here, but arguably one of the coolest things is that by using the NLP Building Blocks you don’t have to pay the per-request pricing that many NLP services charge. You can run this pipeline as much as you need to. And if you are in an environment where your text can’t leave your network, this pipeline can run completely behind a firewall (just like we did in this post).

Apache NiFi EQL Processor

We have published a new open-source project on GitHub: an Apache NiFi processor that filters entities through an Entity Query Language (EQL) query. When used along with the Idyl E3 NiFi processor, you can perform entity filtering in a NiFi data flow pipeline.

To add the EQL processor to your NiFi pipeline, clone the project and build it, or download the jar file from our website. Then copy the jar to NiFi’s lib directory and restart NiFi. The processor will now be available in the list of processors:

The EQL processor has a single property that holds the EQL query:

For this example our query will look for entities whose text is “George Washington”:

select * from entities where text = "George Washington"

Entities matching the EQL query will be outputted from the processor as JSON. Entities not matching the EQL query will be dropped.

With this capability we can create Apache NiFi dataflows that produce alerts when an entity matches a given set of conditions. Entities matching the EQL query can be published to an SQS queue, a Kafka stream, or any other NiFi processor.

The Entity Query Language previously existed as a component of the EntityDB project. It is now its own project on GitHub and is licensed under the Apache Software License, 2.0. The project’s README.md contains more examples of how to construct EQL queries.

Apache NiFi and Idyl E3 Entity Extraction Engine

We are happy to let you know how Idyl E3 Entity Extraction Engine can be used with Apache NiFi. First, what is Apache NiFi? From the NiFi homepage: “Apache NiFi supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic.” Idyl E3 extracts entities (persons, places, things) from natural language text.

That’s a very short description of NiFi, but it is accurate. Apache NiFi allows you to configure simple or complex data processing pipelines. For example, you can configure a pipeline to consume files from a file system and upload them to S3. (See Example Dataflow Templates.) There are many operations you can perform, and they are carried out by components called Processors. There are also many excellent guides about NiFi available online.

There are many processors available for NiFi out of the box. One in particular is the InvokeHttp processor that lets your pipeline send an HTTP request. You can use this processor to send text to Idyl E3 for entity extraction from within your pipeline. However, to make things a bit simpler and more flexible we have created a custom NiFi processor just for Idyl E3. This processor is available on GitHub and its binaries will be included with all editions of Idyl E3 starting with version 2.3.0.

Idyl E3 NiFi Processor

Instructions for how to use the Idyl E3 processor will be added to the Idyl E3 documentation, but they are simple. Here’s a rundown. Copy the idyl-e3-nifi-processor.jar from Idyl E3’s home directory to NiFi’s lib directory and restart NiFi. Once NiFi is available you will see the Idyl E3 processor in the list of processors when adding a processor:

There are a few properties you can set but the only required property is the Idyl E3 endpoint. By default, the processor extracts entities from the input text but this can be changed using the action property. The available actions are:

  • extract (the default) to get a JSON response containing the entities.
  • annotate to return the input text with the entities annotated.
  • sanitize to return the input text with the entities removed.
  • ingest to extract entities from the input text but provide no response. (This is useful if you are letting Idyl E3 plugins handle the publishing of entities to a database or other service outside of the NiFi data flow.)

The available properties are shown in the screen capture below:

And that is it. The processor will send text to Idyl E3 for entity extraction via Idyl E3’s /api/v2/extract endpoint. The response from Idyl E3 containing the entities will be placed in a new idyl-e3-response attribute.

The Idyl E3 NiFi processor is licensed under the Apache Software License, version 2.0. Under the hood, the processor uses the Idyl E3 client SDK for Java which is also licensed under the Apache license.