Intel “Meltdown” and “Spectre” Vulnerabilities

With the recent announcement of the vulnerabilities known as “Spectre” and “Meltdown” in Intel processors, we have put together this post to explain how to protect the virtual machines running our products launched via cloud marketplaces.

Products Launched via Docker Containers

Docker uses the host’s system kernel. Refer to your host OS’s documentation on applying the necessary kernel patch.

Products Launched via the AWS Marketplace

The following product versions use kernel 4.9.62-21.56.amzn1.x86_64, which needs to be updated.

  • Renku Language Detection Engine 1.0.0
  • Prose Sentence Extraction Engine 1.0.0
  • Sonnet Tokenization Engine 1.0.0
  • Idyl E3 Entity Extraction Engine 3.0.0

Run the following commands on each instance:

sudo yum update
sudo reboot
uname -r

The output of the last command will show an updated kernel version of 4.9.76-3.78.amzn1.x86_64 (or newer). Details are available on the AWS Amazon Linux Security Center.
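If you manage several instances, a quick version comparison can confirm that the reboot picked up the patched kernel. This is a sketch of our own; the version strings come from this post, and the parsing helpers are illustrative, not an official AWS utility.

```python
# Compare an Amazon Linux kernel release string against the patched
# release. kernel_tuple() and is_patched() are our own illustrative
# helpers, not part of any AWS tooling.

def kernel_tuple(release):
    """Turn '4.9.76-3.78.amzn1.x86_64' into a comparable tuple of ints."""
    base, _, rest = release.partition("-")
    build = rest.split(".amzn1")[0]
    return tuple(int(n) for n in base.split(".") + build.split("."))

def is_patched(running, patched="4.9.76-3.78.amzn1.x86_64"):
    """True if the running kernel is the patched release or newer."""
    return kernel_tuple(running) >= kernel_tuple(patched)
```

For example, `is_patched("4.9.62-21.56.amzn1.x86_64")` returns `False`, so that instance still needs the update.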

Products Launched via the Azure Marketplace

The following product versions run on CentOS 7.3 with kernel 3.10.0-514.26.2.el7.x86_64, which needs to be updated.

  • Renku Language Detection Engine 1.0.0
  • Prose Sentence Extraction Engine 1.0.0
  • Sonnet Tokenization Engine 1.0.0
  • Idyl E3 Entity Extraction Engine 3.0.0

Run the following commands on each virtual machine:

sudo yum update
sudo reboot
uname -r

The output of the last command will show an updated kernel version of 3.10.0-693.11.6.el7.x86_64 (or newer). For more information see the Red Hat Security Advisory and the announcement email.


Orchestrating NLP Building Blocks with Apache NiFi for Named-Entity Extraction

This blog post shows how we can create an NLP pipeline to perform named-entity extraction on natural language text using our NLP Building Blocks and Apache NiFi. Our NLP Building Blocks provide the ability to perform sentence extraction, string tokenization, and named-entity extraction. They are implemented as microservices and can be deployed almost anywhere, such as AWS, Azure, and as Docker containers.

At the completion of this blog post we will have a system that reads natural language text stored in files on the file system, pulls out the sentences in each file, finds the tokens in each sentence, and finds the named-entities in the tokens.

Apache NiFi is an open-source application that provides data flow capabilities. Using NiFi you can visually define how data should flow through your system. Using what NiFi calls “processors”, you can ingest data from many data sources, perform operations on the data such as transformations and aggregations, and then output the data to an external system. We will be using NiFi to facilitate the flow of text through our NLP pipeline. The text will be read from plain text files on the file system. We will then:

  • Identify the sentences in input text.
  • For each sentence, extract the tokens in the sentence.
  • Process the tokens for named-entities.
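Conceptually, the three steps above chain the building blocks' HTTP endpoints one after another. Here is a rough sketch of that data flow in Python; the endpoint URLs are the ones configured later in this post, and the `post` callable is a stand-in for an HTTP client so the flow can be read (and exercised) without the services running. NiFi, not this code, is what will actually orchestrate the calls.

```python
# Sketch of the pipeline: sentences -> tokens -> entities.
# `post(url, payload)` is an injected stand-in for the HTTP client;
# in the real flow, NiFi's InvokeHTTP processors make these calls.

def run_pipeline(text, post):
    sentences = post("http://localhost:7070/api/sentences", text)
    results = []
    for sentence in sentences:
        tokens = post("http://localhost:9040/api/tokenize", sentence)
        entities = post("http://localhost:9000/api/extract", tokens)
        results.append(entities)
    return results
```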

To get started we will stand up the NLP Building Blocks. This consists of the following applications: Renku Language Detection Engine, Prose Sentence Extraction Engine, Sonnet Tokenization Engine, and Idyl E3 Entity Extraction Engine.

We will launch these applications using a docker-compose script.

git clone https://github.com/mtnfog/nlp-building-blocks
cd nlp-building-blocks
docker-compose up

This will pull the docker images from DockerHub and run the containers. We now have each NLP building block up and running. Let’s get Apache NiFi up and running, too.

To get started with Apache NiFi we will download it. It is a big download at just over 1 GB. You can download it from the Apache NiFi Downloads page or directly from a mirror at this link for NiFi 1.4.0. Once the download is complete we will unzip it and start NiFi:

unzip nifi-1.4.0-bin.zip
cd nifi-1.4.0/bin
./nifi.sh start

NiFi will start and after a few minutes it will be available at http://localhost:8080/nifi. (If you are curious you can see the NiFi log under logs/nifi-app.log.) Open your browser to that page and you will see the NiFi canvas as shown below. We can now design our data flow around the NLP Building Blocks!

If you want to skip to the meat and potatoes you can get the NiFi template described below in the nlp-building-blocks repository.

Our source data is going to be read from text files on our computer stored under /tmp/in/. We will use NiFi’s GetFile processor to read the file. Add a GetFile processor to the canvas:


Right-click the GetFile processor and click Configure to bring up the processor’s properties. The only property we are going to set is the Input Directory property. Set it to /tmp/in/ and click Apply:

We will use the InvokeHTTP processor to send API requests to the NLP Building Blocks, so add a new InvokeHTTP processor to the canvas:

This first InvokeHTTP processor will be used to send the data to Prose Sentence Extraction Engine to extract the sentences in the text. Open the InvokeHTTP processor’s properties and set the following values:

  • HTTP Method – POST
  • Remote URL – http://localhost:7070/api/sentences
  • Content Type – text/plain

Set the processor to auto-terminate every relationship except Response. We will also set the processor’s name to ProseSentenceExtractionEngine; since we will be using multiple InvokeHTTP processors, this lets us easily differentiate between them. We can now create a connection between the GetFile and InvokeHTTP processors by clicking and drawing a line between them. Our flow right now reads files from the file system and sends their contents to Prose:

The sentences returned from Prose will be in a JSON array. We can split this array into individual FlowFiles with the SplitJson processor. Add a SplitJson processor to the canvas and set its JsonPath Expression property to $.* as shown below:
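For reference, here is what SplitJson with a JsonPath of $.* is doing, expressed in plain Python: a JSON array of sentences becomes one item (one FlowFile) per element. The sample sentences are illustrative.

```python
# What SplitJson with JsonPath $.* does, in plain Python: a JSON array
# becomes one item (one FlowFile) per element.
import json

response_body = '["George Washington was president.", "This is another sentence."]'
flowfiles = json.loads(response_body)
# flowfiles[0] -> "George Washington was president."
```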

Connect the ProseSentenceExtractionEngine processor to the SplitJson processor via the Response relationship. The canvas should now look like this:

Now that we have the individual sentences in the text we can send those sentences to Sonnet Tokenization Engine to tokenize the sentences. Similar to before, add an InvokeHTTP processor and name it SonnetTokenizationEngine. Set its method to POST, the Remote URL to http://localhost:9040/api/tokenize, and the Content-Type to text/plain. Automatically terminate every relationship except Response. Connect it to the SplitJson processor using the Split relationship. The result of this processor will be an array of tokens from the input sentence.

While we are at it, let’s go ahead and add an InvokeHTTP processor for Idyl E3 Entity Extraction Engine. Add the processor to the canvas and set its name to IdylE3EntityExtractionEngine. Set its properties:

  • HTTP Method – POST
  • Remote URL – http://localhost:9000/api/extract
  • Content-Type – application/json

Connect the SonnetTokenizationEngine processor to the IdylE3EntityExtractionEngine processor via the Response relationship. All other relationships can be set to auto-terminate. To make things easier to see, we are going to add an UpdateAttribute processor that sets the filename of each FlowFile to a random UUID. Add an UpdateAttribute processor and add a new property called filename with the value ${uuid}.txt. We will also add a processor to write the FlowFiles to disk so we can see what happened during the flow’s execution: add a PutFile processor and set its Directory property to /tmp/out/.

Our finished flow looks like this:

To test our flow we are going to use a super simple text file. The full contents of the text file are:

George Washington was president. This is another sentence. Martha Washington was first lady.

Save this file as /tmp/in/test.txt.

Now, start up the NLP Building Blocks:

git clone https://github.com/mtnfog/nlp-building-blocks
cd nlp-building-blocks
docker-compose up

Now you can start the processors in the flow! The file /tmp/in/test.txt will disappear and three files will appear in /tmp/out/. The three files will have random UUIDs for filenames thanks to the UpdateAttribute processor. If we look at the contents of each of these files we see:

First file:

{"entities":[{"text":"George Washington","confidence":0.96,"span":{"tokenStart":0,"tokenEnd":2},"type":"person","languageCode":"eng","extractionDate":1514488188929,"metadata":{"x-model-filename":"mtnfog-en-person.bin"}}],"extractionTime":84}

Second file:

{"entities":[],"extractionTime":7}

Third file:

{"entities":[{"text":"Martha Washington","confidence":0.89,"span":{"tokenStart":0,"tokenEnd":2},"type":"person","languageCode":"eng","extractionDate":1514488189026,"metadata":{"x-model-filename":"mtnfog-en-person.bin"}}],"extractionTime":2}

The input text was broken into three sentences so we have three output files. In the first file we see that George Washington was extracted as a person entity. The second file did not have any entities. The third file had Martha Washington as a person entity. Our NLP pipeline orchestrated by Apache NiFi read the input, broke it into sentences, broke each sentence into tokens, and then identified named-entities from the tokens.
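If you want to inspect the output programmatically rather than eyeballing the files, the JSON parses with the standard library. The sample below is the first output file's contents trimmed to the fields we read; this snippet is our own illustration, not part of the flow.

```python
# Read the entities back out of an output file's JSON. The sample is the
# first output file from above, trimmed to the fields we inspect here.
import json

first_file = '{"entities":[{"text":"George Washington","confidence":0.96,"type":"person"}],"extractionTime":84}'
result = json.loads(first_file)
people = [e["text"] for e in result["entities"] if e["type"] == "person"]
# people -> ["George Washington"]
```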

This flow assumed the language would always be English. If you are unsure of the input language, you can add another InvokeHTTP processor to utilize Renku Language Detection Engine. This enables language detection inside your flow, and you can route FlowFiles through the flow based on the detected language, giving you a very powerful NLP pipeline.

There’s a lot of cool stuff here, but arguably one of the coolest is that by using the NLP Building Blocks you don’t have to pay the per-request pricing that many NLP services charge. You can run this pipeline as much as you need to. And if you are in an environment where your text can’t leave your network, this pipeline can run completely behind a firewall (just like we did in this post).


Idyl E3 2.5.1

We are in the process of publishing Idyl E3 2.5.1 to the AWS Marketplace and to our website for download. The only change in 2.5.1 from 2.5.0 is a fix to address OpenNLP CVE-2017-12620. We have updated the Release Notes to reflect this as well.

The details of the issue are explained in OpenNLP CVE-2017-12620. It is important that only models from trusted sources are used in Idyl E3. Please be aware of a model’s origin, whether it is a model downloaded from our website, created by you, or created by someone else in your organization.

Yahoo! Vespa and Entity Annotations

Some interesting news this week is that Yahoo! has open-sourced their software that drives many of their content recommendation systems. The software, called Vespa, is available at vespa.ai.

Annotations on words and phrases in the text can be provided as text is ingested into Vespa. This process is described in the Vespa Annotations API documentation. But in order to make these annotations you need something that can identify persons, places, and things in the text! Idyl E3 Entity Extraction Engine is perfect for this and here’s how:

You probably have a pipeline in which text is gathered from some source and eventually pushed to your search application, which in this case is Vespa. All that is needed is to modify your pipeline to first send the text to Idyl E3 to get the entities. Once a response is received from Idyl E3, the text along with its annotations can be sent on to Vespa. It really is that easy. You can customize the types of entities to extract through the entity models installed in Idyl E3, so you could annotate persons, places, and things like buildings, schools, and airports.

To recap: if you have not yet read about Vespa, it is worth a few minutes of your time. Its ability to ingest text with annotations makes it a natural fit for Idyl E3. You can certainly use Idyl E3 to annotate text for Vespa now, and we’re going to make some improvements to make working with Vespa even easier.

Idyl E3 2.6.0 Updates

As we work toward Idyl E3 2.6.0 we keep the Release Notes page updated with what’s new, tweaked, and fixed in 2.6.0. Probably the most significant new feature is support for GPUs.

Blacklisted Models

Less exciting but still useful is how models that fail to load are handled in 2.6.0. Previously, when a model failed to load it would be retried the next time the model was needed. If nothing has changed that could help the model load, this results in needlessly trying, and failing, to load the model. In 2.6.0, if a model fails to load it is added to a blacklist, and Idyl E3 will not attempt to reload any model on the blacklist until Idyl E3 is restarted. A message will be included in Idyl E3’s log when a model is blacklisted.
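The blacklisting behavior can be pictured with a small sketch: a model that fails to load is recorded and never retried until restart. The class and method names here are ours for illustration; they are not Idyl E3's internal API.

```python
# Minimal sketch of the blacklisting behavior described above. A model
# that fails to load goes on a blacklist and is not retried until the
# loader (i.e., Idyl E3) is restarted. Illustrative names, not the real API.

class ModelLoader:
    def __init__(self, load_fn):
        self._load = load_fn      # load_fn(name) raises on failure
        self._blacklist = set()
        self._loaded = {}

    def get(self, name):
        if name in self._blacklist:
            return None           # blacklisted: do not retry until restart
        if name not in self._loaded:
            try:
                self._loaded[name] = self._load(name)
            except Exception:
                self._blacklist.add(name)   # would also log a message here
                return None
        return self._loaded[name]
```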

A model can fail to load for a few reasons. The most common reasons are:

  • The model file defined in the manifest does not exist or cannot be read due to insufficient permissions.
  • The model’s encryption key is invalid.
  • The model’s license key is invalid.

IDYL_E3_HOME Environment Variable

Also noteworthy is the IDYL_E3_HOME environment variable that must be set. If you launch Idyl E3 through the AWS Marketplace it is taken care of for you. If not, you just need to set IDYL_E3_HOME to the location where you extracted Idyl E3 (we recommend /opt/idyl-e3):

export IDYL_E3_HOME=/opt/idyl-e3

Most of Idyl E3’s scripts reference the IDYL_E3_HOME environment variable to know where to find Idyl E3’s files.

Model Downloader

The last new thing we’ll mention here is the new tool included with Idyl E3 called the Model Downloader. When run, this command line tool shows you models available for download from us that you can download and install into your Idyl E3. No more downloading via your web browser and then having to copy to Idyl E3. You can now download models straight from Idyl E3. The tool will prompt you for a login (it is your Mountain Fog account username and password – register for free if you need a login) and then present you with a simple menu. The tool also supports a non-interactive mode so you can script the download of models!

We’ll give a more detailed look at the Model Downloader tool once 2.6.0 is released so stay tuned.

Model Download Tool

In Idyl E3 2.6.0 we will be introducing a command line tool to download entity, sentence, and token models directly from us. The tool will make getting and using Idyl E3 models much easier. You will no longer have to manually download a model, unzip it, and copy it to Idyl E3’s models directory; the tool will perform these steps for you. It will have both interactive and non-interactive modes so it can be integrated into provisioning scripts to automatically obtain models when deployed.

On our side, the tool will help us to more rapidly create models and make them available to you.

The tool will be bundled with Idyl E3 2.6.0 and will support all platforms.

Streaming Text in Idyl E3 2.5.0

The Idyl E3 API has an /extract endpoint that receives text and returns the extracted entities in response. This means you have to make a full HTTP connection for each extraction request. Idyl E3 2.5.0 introduces the ability to accept streaming text through a TCP socket. When Idyl E3 starts it will open a TCP port and listen for incoming text. As text is received, Idyl E3 will extract entities from it and return an entity extraction response.

Now you can extract entities from the command line using a tool like netcat:

cat some-file.txt | netcat [idyl-e3-ip-address] [port]

Compare that command with using cURL:

curl -X POST http://idyl-e3-ip-address:port/api/v2/extract -H "Content-Type: text/plain; charset=UTF-8" -d "George Washington was president."

It’s easy to see which command is simpler. Streaming should make processing text files and other continuous sources of text much simpler.

The response to streaming input is identical to the response received from the /extract endpoint. (Both commands above will produce the same output.)

{
   "entities":[
      {
         "text":"George Washington",
         "confidence":0.96,
         "span":{
            "tokenStart":0,
            "tokenEnd":2,
            "characterStart":0,
            "characterEnd":17
         },
         "type":"person",
         "languageCode":"eng",
         "context":"not-set",
         "documentId":"not-set",
         "extractionDate":1502970191843,
         "metadata":{}
      }
   ],
   "extractionTime":72
}

Streaming is disabled by default. To enable it set the streaming.enabled property to true in Idyl E3’s properties file. Streaming does not currently support authentication. See the Idyl E3 Documentation for more streaming configuration options.

What’s New in Idyl E3 2.5.0

Here’s a quick summary of what’s new in Idyl E3 2.5.0. It’s not available yet but will be soon. For a full list check out the Idyl E3 Release Notes.

What’s New

  • English-language person entity models can now be trained using the CoNLL-2003 format.
  • You can now create and use deep learning neural network entity models. Check out the previous blog posts for more information!
  • There’s a new setting that allows you to control how duplicate entities per extraction request are handled. You can choose to retain all duplicates or return only the duplicate entity with the highest probability.
  • A new TCP endpoint accepts streaming text. This endpoint deprecates the /ingest API endpoint.
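The duplicate-entity setting can be pictured like this. A minimal sketch, assuming entities are dicts shaped like the JSON responses shown elsewhere in this blog; the function name and flag are ours, not Idyl E3's actual setting names.

```python
# Sketch of the duplicate-entity handling described above: either retain
# all duplicates, or keep only the highest-probability occurrence of each
# entity text. Illustrative names, not Idyl E3's actual settings.

def dedupe(entities, retain_all=False):
    if retain_all:
        return list(entities)
    best = {}
    for e in entities:
        key = e["text"]
        if key not in best or e["confidence"] > best[key]["confidence"]:
            best[key] = e
    return list(best.values())
```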

What’s Changed

  • Idyl E3 2.5.0 changes all language codes to 3-letter ISO 639 codes. While 2-letter codes are still supported, we recommend using the 3-letter codes instead.

What’s Fixed

  • Entities extracted by a non-model method (regular expression, dictionary) previously returned a value of 100.0 for the entity’s probability. Extracted entity probabilities should be in the range 0 to 1, so these entities are now extracted with a probability of 1.0 instead of 100.0.

Deep Learning Entity Extraction in Idyl E3

Idyl E3 Entity Extraction Engine 2.5.0 will introduce entity extraction powered by deep learning neural networks. Neural networks are powerful machine learning algorithms that excel at tasks like natural language processing. Idyl E3 will also support entity model training and usage on GPUs as well as CPUs. Using GPUs provides significant performance improvements. Idyl E3 2.5.0 will add support for AWS’ P2 instance type.

Entity models created by a deep learning neural network will be referred to as “second generation models.” Entity models created by Idyl E3 2.4.0 and earlier will be referred to as “first generation models.”

So how will the current entity models differ from the deep learning entity models?

Good question. Training entity models with Idyl E3 2.4.0 and earlier requires you to identify “features” of your text in order to train the model. Some examples of features include where an entity appears in a sentence, what words surround it, whether the word is capitalized, and so on. While you can create very powerful models using this method, identifying the features can be a laborious task that requires intimate knowledge of the text. It can also result in over-fitting, causing the model to generalize poorly to non-training text.

When training a deep learning entity model there is no need to identify the features; the algorithm is able to learn the features on its own during training. It does this through word vectors. Idyl E3 2.5.0 will be able to use word vectors generated by applications such as word2vec and GloVe. To create a deep learning entity model, simply provide your input training text and word vectors and Idyl E3 will generate the model.

Can I customize the neural network used to train a model?

There will be many options available to customize the neural network used for model training with a standard set of options to be used out of the box. We will describe all of the available options in the Idyl E3 2.5.0 User’s Guide.

Will there be any other impacts of the new type of model training?

No. You can continue to use your existing first generation models. You can also continue to train new first generation models. In fact, you can use first and second generation models simultaneously in an Idyl E3 pipeline.

Any other questions that we did not cover? Let us know!

English-language “Places” model in Idyl E3 2.4.0 Analyst Edition

Idyl E3 2.4.0 now includes an English-language “Places” model as well as an English-language “Persons” model. Prior to version 2.4.0, only the persons model was included. Idyl E3 2.4.0 Analyst Edition will be available from the AWS Marketplace soon.

The model will be loaded automatically when Idyl E3 2.4.0 Analyst Edition starts. An entity extraction request for “George Washington was president of the United States.” will return two entities:

  • George Washington (person)
  • United States (place)

AWS Marketplace

Idyl E3 2.4.0 comes with a free 30-day trial period during which you can use a single instance of Idyl E3 in AWS while paying only the cost of the underlying instance!

Training Definition File

In the next release of Idyl E3 Entity Extraction Engine (which will be version 2.4.0) we will introduce the Training Definition File to help alleviate a few problems.

The problems:

  1. When training an entity model there are quite a few command line arguments that you have to provide. The sheer number of arguments doesn’t help with usability.
  2. After training a model, unless you keep excellent documentation it’s easy to lose track of the training parameters. What entity type? Language? Iterations? Features? And so on.
  3. How do you manage the command line arguments and the feature generators XML file?

The Training Definition File offers a solution to these problems. It is an XML file that contains all of the training parameters. Everything. Now you have a record of the parameters used to create the model while also simplifying the command line arguments. Note that you can still use the command line arguments as they will remain available.

Below is an example of a training definition file. Note that the final format may change between now and the release.

<?xml version="1.0" encoding="UTF-8"?>
<trainingdefinition xmlns="https://www.mtnfog.com">
	<algorithm cutoff="1" iterations="1" threads="2" />
	<trainingdata file="person-train.txt" />
	<model file="person.bin" encryptionkey="enckey" language="en" type="person" />	
	<features>
		<generators>
			<cache>
				<generators>
					<window prevLength="2" nextLength="2">
						<tokenclass />
					</window>
					<window prevLength="2" nextLength="2">
						<token />
					</window>
					<definition />
					<prevmap />
					<bigram />
					<sentence begin="true" end="true" />
				</generators>
			</cache>
		</generators>
	</features>
</trainingdefinition>

You can see in the above XML that all of the entity model training parameters are included. The training definition file defines four things:

  1. The training algorithm.
  2. The training data.
  3. The output model.
  4. The feature generators.
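To show that the format is straightforward to work with, here is a sketch that reads those four parts back using Python's standard library. The element names match the sample above, but as the post notes, the final format may change.

```python
# Parse a training definition file with the standard library. The sample
# XML mirrors the example above (features trimmed for brevity); the final
# format may change before release.
import xml.etree.ElementTree as ET

xml = """<trainingdefinition xmlns="https://www.mtnfog.com">
  <algorithm cutoff="1" iterations="1" threads="2"/>
  <trainingdata file="person-train.txt"/>
  <model file="person.bin" encryptionkey="enckey" language="en" type="person"/>
  <features/>
</trainingdefinition>"""

ns = {"td": "https://www.mtnfog.com"}
root = ET.fromstring(xml)
model = root.find("td:model", ns)
# model.get("type") -> "person"
```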

This removes the need for a separate feature generators file since it is now included in the training definition file. Now when training an entity model you can use the simpler command:

java -jar idyl-e3-entity-model-generator.jar -td training-definition.xml

Look for the training definition file functionality to be included with Idyl E3 2.4.0. The details may change so check back for updates.

Idyl E3 Entity Extraction Engine AWS Reference Architectures

With the popularity of running Idyl E3 Entity Extraction Engine on AWS, we wanted to provide some AWS reference architectures to help you get started deploying Idyl E3 to AWS. Don’t forget Idyl E3 is available on the AWS Marketplace for easy launching, and we have some Idyl E3 CloudFormation templates available on GitHub. We also offer managed Idyl E3 services if you prefer a hands-off approach to Idyl E3 deployment and operation.

A Few Notes Before Starting

Using a Pre-Configured AMI

No matter which architecture you choose, we recommend creating a pre-configured Idyl E3 AMI and using it to launch new Idyl E3 instances. This method is recommended instead of relying on user-data scripts to perform the configuration because the time required to spin up a pre-configured AMI can be significantly less than with user-data scripts. If you want to have the AMI configuration under source control, we highly recommend using HashiCorp’s Packer to build the AMI.

Stateless API

Before we describe the architectures, it is helpful to note that the Idyl E3 API is stateless. There is no session data that needs to be shared by multiple instances, and as long as all Idyl E3 instances are configured identically (as they should be when behind a load balancer), it does not matter which instance receives a given entity extraction request. We can take advantage of this stateless architecture to scale Idyl E3 up (and down) as much as we need in order to meet the demands of the load.

Load-balanced Architecture

The first architecture is a very simple one, yet it is probably adequate to meet the needs of most users. This architecture has a single VPC that contains two subnets. One subnet is public and contains an Elastic Load Balancer (ELB); the other subnet is private and contains the Idyl E3 instances. In the diagram shown below, the ELB is set to be a public ELB, allowing Idyl E3 requests to be received from the internet. However, if your application will also run in the VPC, you can change the ELB to an internal ELB. Note that this architecture uses a fixed number of Idyl E3 instances behind the ELB; any scaling up or down will have to be performed manually when needed. Idyl E3’s API has a /health endpoint that returns HTTP 200 OK when everything is ok, which is perfect for ELB instance health checks.

Simple Idyl E3 AWS Architecture with VPC and ELB

Load-balanced and Auto-scaling Architecture

Launch the Idyl E3 CloudFormation stack!

The previous architecture is simple but very functional, and it minimizes cost. The first thing that will be noticed about it is the static nature of the Idyl E3 instances. To provide some flexibility, we can modify the architecture a bit to put the Idyl E3 instances into an autoscaling group. We can use the group’s Desired Capacity to manually control the number of Idyl E3 instances, or we can configure the autoscaling group to automatically scale up and down based on some chosen metrics. The average CPU usage is a good metric for scaling Idyl E3 because entity extraction can cause CPU usage to rise. With that change, here is what our architecture looks like now:

Idyl E3 AWS architecture with VPC, ELB, and autoscaling.

With the autoscaling we don’t have to worry about unexpected surges or decreases in entity extraction requests. The number of Idyl E3 instances will automatically scale up and down based on the average CPU usage of all Idyl E3 instances. Scaling down is important in order to keep costs to a minimum. Nobody wants to pay for more than what they need.

This architecture is available in our GitHub repository of Idyl E3 CloudFormation Templates. The template also contains an optional bastion instance to facilitate SSH access into the Idyl E3 instances from outside the VPC.

Need more?

Got more complicated requirements? Let us know. We have AWS certified engineers on staff and we’ll be glad to help.

Apache NiFi EQL Processor

We have published a new open source project on GitHub: an Apache NiFi processor that filters entities through an Entity Query Language (EQL) query. When used along with the Idyl E3 NiFi processor, you can perform entity filtering in a NiFi dataflow pipeline.

To add the EQL processor to your NiFi pipeline, clone the project and build it, or download the jar file from our website. Then copy the jar to NiFi’s lib directory and restart NiFi. The processor will now be available in the list of processors:

The EQL processor has a single property that holds the EQL query:

For this example our query will look for entities whose text is “George Washington”:

select * from entities where text = "George Washington"

Entities matching the EQL query will be output from the processor as JSON. Entities not matching the EQL query will be dropped.
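In plain Python terms, the query above behaves like a simple predicate over the entity list: keep matching entities, drop the rest. This mimics what the EQL query does for illustration; it is not the EQL implementation itself.

```python
# Plain-Python equivalent of: select * from entities where text = "George Washington"
# Illustration of the filtering behavior only, not the EQL implementation.

def filter_entities(entities, text):
    return [e for e in entities if e["text"] == text]

entities = [{"text": "George Washington", "type": "person"},
            {"text": "Martha Washington", "type": "person"}]
matched = filter_entities(entities, "George Washington")
# matched -> [{"text": "George Washington", "type": "person"}]
```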

With this capability we can create Apache NiFi dataflows that produce alerts when an entity matches a given set of conditions. Entities matching the EQL query can be published to an SQS queue, a Kafka stream, or any other NiFi processor.

The Entity Query Language previously existed as a component of the EntityDB project. It is now its own project on GitHub and is licensed under the Apache Software License, 2.0. The project’s README.md contains more examples of how to construct EQL queries.

Apache NiFi and Idyl E3 Entity Extraction Engine

We are happy to let you know how Idyl E3 Entity Extraction Engine can be used with Apache NiFi. First, what is Apache NiFi? From the NiFi homepage: “Apache NiFi supports powerful and scalable directed graphs of data routing, transformation, and system mediation logic.” Idyl E3 extracts entities (persons, places, things) from natural language text.

That’s a very short description of NiFi but it is very accurate. Apache NiFi allows you to configure simple or complex data processing pipelines. For example, you can configure a pipeline to consume files from a file system and upload them to S3. (See Example Dataflow Templates.) There are many operations you can perform, and they are carried out by components called Processors. There are many excellent guides about NiFi available online.

There are many processors available for NiFi out of the box. One in particular is the InvokeHttp processor that lets your pipeline send an HTTP request. You can use this processor to send text to Idyl E3 for entity extraction from within your pipeline. However, to make things a bit simpler and more flexible we have created a custom NiFi processor just for Idyl E3. This processor is available on GitHub and its binaries will be included with all editions of Idyl E3 starting with version 2.3.0.

Idyl E3 NiFi Processor

Instructions for how to use the Idyl E3 processor will be added to the Idyl E3 documentation, but they are simple. Here’s a rundown. Copy the idyl-e3-nifi-processor.jar from Idyl E3’s home directory to NiFi’s lib directory and restart NiFi. Once NiFi is available you will see Idyl E3 in the list of processors when adding a processor:

Idyl E3 NiFi Processor

There are a few properties you can set but the only required property is the Idyl E3 endpoint. By default, the processor extracts entities from the input text but this can be changed using the action property. The available actions are:

  • extract (the default) to get a JSON response containing the entities.
  • annotate to return the input text with the entities annotated.
  • sanitize to return the input text with the entities removed.
  • ingest to extract entities from the input text but provide no response. (This is useful if you are letting Idyl E3 plugins handle the publishing of entities to a database or other service outside of the NiFi data flow.)

The available properties are shown in the screen capture below:

And that is it. The processor will send text to Idyl E3 for entity extraction via Idyl E3’s /api/v2/extract endpoint. The response from Idyl E3 containing the entities will be placed in a new idyl-e3-response attribute.

The Idyl E3 NiFi processor is licensed under the Apache Software License, version 2.0. Under the hood, the processor uses the Idyl E3 client SDK for Java which is also licensed under the Apache license.

Idyl NLP Annotation Format

Idyl E3’s entity model training tool expects entities in training text to be annotated in the format used by OpenNLP. This format uses START and END tags to denote entities:

<START:person> George Washington <END> was president.

This works great but it has a drawback: the annotations and the text have to be combined in a single file. Once the text is annotated, it becomes difficult to use the training text for any other purposes.
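For a sense of the combined format, here is a minimal sketch that parses a line into plain tokens plus (start, end, type) spans. OpenNLP's own trainer does this parsing for real; this sketch only handles the simple tag layout shown above.

```python
# Parse an OpenNLP-style <START:type> ... <END> annotated line into
# tokens and (start, end, type) spans. Sketch for the simple layout
# shown above; OpenNLP's trainer handles the real parsing.

def parse_annotated(line):
    tokens, spans, start, etype = [], [], None, None
    for tok in line.split():
        if tok.startswith("<START:"):
            start, etype = len(tokens), tok[len("<START:"):-1]
        elif tok == "<END>":
            spans.append((start, len(tokens), etype))
        else:
            tokens.append(tok)
    return tokens, spans

tokens, spans = parse_annotated("<START:person> George Washington <END> was president.")
# tokens -> ["George", "Washington", "was", "president."]
# spans  -> [(0, 2, "person")]
```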

New Annotation Format

Idyl E3 2.4.0 will introduce an additional method of annotating text that allows the annotations to be stored separately from the training text, in their own file (and we plan to eventually support storing the annotations in a database). Even though Idyl E3 2.4.0 is not yet ready for prime time, we wanted to introduce this feature early in case you are in the middle of an annotation effort and want to use the new format.

It is still required that the input text contain a single sentence per line. Use blank lines to indicate document boundaries. Here’s an example of a simple input training file:

George Washington was president .
He was president of the United States .
George Washington was married to Martha Washington .
In 1755 , Washington became the senior American aide to British General Edward Braddock on the ill-fated Braddock expedition .

And here are the annotations, stored in a separate file:

1 0 2 person
2 5 6 place
3 0 2 person
3 5 7 person
4 11 12 person

Here’s what this means. Each line in the annotations file represents an annotation in the training text. So there are 5 annotations in this example.

  • The first column is the line number of the sentence that contains the entity. In this example, all four lines of the training text contain at least one annotation.
  • The second column is the token index of the first token of the entity. Indexes are zero-based, so the first token is token zero.
  • The third column is the token index of the end of the entity, one past its last token. (The annotation 1 0 2 person covers tokens 0 and 1, “George Washington”.)
  • The last column is the type of the entity.

Note that there are two entities in the third line, and each gets its own separate line in the annotations file. Specifying the entity text in the three-column format simplifies the annotation by removing the need to specify the entity’s token start and end positions, but it will only annotate the first occurrence of the entity text. (If Edward Braddock had occurred more than once in the input text on line 4, only the first occurrence would be annotated.)
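To make the format concrete, here is a small Go sketch that parses the four-column annotation records shown above. The struct and function names are my own, not part of Idyl E3:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Annotation mirrors one line of the annotations file: the line number of
// the sentence, the start and end token indexes, and the entity type.
// (These names are illustrative; Idyl E3's own types may differ.)
type Annotation struct {
	Line  int
	Start int
	End   int
	Type  string
}

// parseAnnotations reads the whitespace-separated, four-column format
// described above, one annotation per line.
func parseAnnotations(input string) ([]Annotation, error) {
	var annotations []Annotation
	for _, line := range strings.Split(strings.TrimSpace(input), "\n") {
		fields := strings.Fields(line)
		if len(fields) != 4 {
			return nil, fmt.Errorf("expected 4 columns, got %d: %q", len(fields), line)
		}
		lineNo, err := strconv.Atoi(fields[0])
		if err != nil {
			return nil, err
		}
		start, err := strconv.Atoi(fields[1])
		if err != nil {
			return nil, err
		}
		end, err := strconv.Atoi(fields[2])
		if err != nil {
			return nil, err
		}
		annotations = append(annotations, Annotation{lineNo, start, end, fields[3]})
	}
	return annotations, nil
}

func main() {
	input := "1 0 2 person\n2 5 6 place\n3 0 2 person\n3 5 7 person\n4 11 12 person"
	annotations, err := parseAnnotations(input)
	if err != nil {
		panic(err)
	}
	fmt.Println(len(annotations), "annotations") // 5 annotations
}
```

Because the annotations are plain whitespace-separated records, a parser like this pairs each record with its sentence by line number.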

Summary

Now your annotations can be kept separate from your training text allowing you to use your training text for other purposes. Additionally, we hope that this new annotation method helps decrease the time required for annotating and helps with automating the process. As mentioned earlier in the post, currently the only supported means of storing the annotations is in a separate file but we plan to extend this to support databases in a future release of Idyl E3.

The Entity Model Generator tool included in Idyl E3 has been updated to allow for using this new annotation format. You can, however, continue to use the OpenNLP-style annotations when creating entity models. This new annotation format is only available for entity models. Sentence, token, parts-of-speech, and lemma model annotations will remain unchanged in 2.4.0.

Idyl E3 SDK for Go

The Idyl E3 SDK for Go is now available on GitHub. This SDK allows you to integrate Idyl E3’s entity extraction capabilities into your Go projects.

Like the other Idyl E3 SDKs, the project is licensed under the Apache Software License, version 2.0.

It’s easy to use:

endpoint := "http://localhost:9000"
s := "George Washington was president."
confidence := 0
context := "context"
documentID := "documentID"
language := "en"
key := "your-api-key"

response := Extract(endpoint, s, confidence, context, documentID, language, key)


New Feature Generators in Idyl E3 2.3.0

A feature generator is arguably the most important part of model-based entity extraction. The feature generators create “features” based on aspects of the input text that are used to determine what is and what is not an entity. Choosing the right (or wrong) features when training your entity models can have a significant impact on the performance of the models so we want you to have a good selection of feature generators available for use.

There are some new feature generators in Idyl E3 2.3.0 available to you that we’d like to take a minute to describe. All of the available feature generators and how to apply each one is described in the Idyl E3 2.3.0 Documentation.


Special Character Feature Generator

This feature generator generates features for tokens that contain special characters. For example, the token Hello would not generate a feature, but the token He*llo would. This feature generator is probably most useful in the domains of science and healthcare, particularly for chemical and drug names.
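A toy Go sketch of the underlying check (an illustration of the idea only, not Idyl E3’s actual implementation):

```go
package main

import (
	"fmt"
	"unicode"
)

// hasSpecialCharacter reports whether a token contains a character that is
// neither a letter nor a digit. A feature generator could emit a feature
// for any token where this is true.
func hasSpecialCharacter(token string) bool {
	for _, r := range token {
		if !unicode.IsLetter(r) && !unicode.IsDigit(r) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasSpecialCharacter("Hello"))  // false
	fmt.Println(hasSpecialCharacter("He*llo")) // true
}
```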

Token Part of Speech Feature Generator

This feature generator generates features based on each token’s part of speech. To use this feature generator you must provide a trained part of speech model. (Idyl E3 2.3.0 includes a tool for creating parts-of-speech models from your text.) This feature generator helps improve entity extraction performance by also being able to consider each entity’s part of speech.
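As a rough illustration, such a feature generator might emit feature strings built from a token’s tag and the tags of its neighbors. The feature names below are hypothetical, not Idyl E3’s actual feature vocabulary:

```go
package main

import "fmt"

// posFeatures sketches the kind of features a part-of-speech feature
// generator might emit for the token at index i: the token's own tag plus
// the tags of its immediate neighbors.
func posFeatures(tags []string, i int) []string {
	features := []string{"pos=" + tags[i]}
	if i > 0 {
		features = append(features, "prevpos="+tags[i-1])
	}
	if i < len(tags)-1 {
		features = append(features, "nextpos="+tags[i+1])
	}
	return features
}

func main() {
	// Tags for "George Washington was president ."
	tags := []string{"NNP", "NNP", "VBD", "NN", "."}
	fmt.Println(posFeatures(tags, 1)) // [pos=NNP prevpos=NNP nextpos=VBD]
}
```

Features like these let the model learn, for instance, that person entities tend to be proper nouns (NNP) preceded by other proper nouns.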

Word Normalization Feature Generator

This feature generator normalizes tokens by replacing all uppercase characters with A, all lowercase characters with a, and all digits with 0. For example, the token HelloWorld25 would be normalized to AaaaaAaaaa00. This feature generator can optionally lemmatize each token prior to normalization by applying a lemmatization model. (Idyl E3 2.3.0 includes a tool for creating lemmatization models from your text.) Like the special character feature generator, it is probably most useful in the domains of science and healthcare, particularly for chemical and drug names.
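The normalization rule described above can be sketched in a few lines of Go. How the generator treats characters other than letters and digits is an assumption here; this sketch leaves them unchanged:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// normalize replaces uppercase characters with A, lowercase characters
// with a, and digits with 0. Any other character passes through as-is
// (an assumption of this sketch).
func normalize(token string) string {
	var b strings.Builder
	for _, r := range token {
		switch {
		case unicode.IsUpper(r):
			b.WriteRune('A')
		case unicode.IsLower(r):
			b.WriteRune('a')
		case unicode.IsDigit(r):
			b.WriteRune('0')
		default:
			b.WriteRune(r)
		}
	}
	return b.String()
}

func main() {
	fmt.Println(normalize("HelloWorld25")) // AaaaaAaaaa00
}
```

Normalization collapses tokens with the same shape (for example, drug codes like AB-1234) into a single feature, which helps the model generalize to names it never saw in training.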


Idyl E3 2.2.0

Today we are announcing the release of Idyl E3 2.2.0. (See the full Release Notes.) This version brings exciting new features such as heuristic confidence filtering, support for all UTF-8 languages, and statistics reporting.

Idyl E3 2.2.0 can be downloaded from our website today. Look for it to be available on the AWS Marketplace in the upcoming week.


Idyl E3 2.1.0

Idyl E3 2.1.0 has been released. This version introduces a new version of the API that includes changes to the extract and ingest endpoints: with version 2 of the API, these two endpoints accept text in the body of the request instead of as a query string parameter. Version 1 of the API is still available, so you do not need to update your clients right away. The Idyl E3 Java SDK and the Idyl E3 .NET SDK have been updated to use API v2.

Idyl E3 2.1.0 is based on a customized OpenNLP 1.7.0, which was released in early January 2017. Previous versions of Idyl E3 were based on a customized OpenNLP 1.6.0.

Idyl E3 2.1.0 Analyst Edition will be available on the AWS Marketplace soon. The Analyst Edition includes all plugins and allows for the use of unlimited custom models without separate licensing. (See the Idyl E3 edition comparison.)