Large Scale Data Analysis

Matt Walker


Four Paths to Java Parallelism

Finding their 'sweet spots'

Parallel programming in Java is becoming easier with tools such as the fork/join framework, Pervasive DataRush, Terracotta, and Hadoop. This article gives a high-level description of each approach, pointing you in the right direction to begin writing parallel applications of your own.

Boiling the Ocean of Data
Companies today are swimming in data. The increasing ease with which data can be collected, combined with the decreasing cost of storing and managing it, means huge amounts of information are now accessible to anyone with the inclination. And many businesses do have the desire. Internet search engines index billions of pages of content on the web. Banks analyze credit card transactions looking for fraud. While these may be industry-specific examples, there are others that apply broadly: business intelligence, predictive analytics, and regulatory compliance to name a few.

How can programmers handle the volumes of data now available? Even efficient algorithms take a long time to run on such enormous data sets; huge amounts of processing power need to be harnessed. The good news is that we already know the answer. Parallel processing isn't new, but with multicore processors it has become accessible to nearly everyone. The need and ability to perform complex data processing are no longer confined to scientific computing; increasingly, they are concerns of the corporate world.

There's bad news too. While parallel processing is much more feasible, it isn't necessarily any easier. Writing complex and correct multithreaded programs is hard. There are many details of concurrency - thread management, data consistency, and synchronization to name a few - that must be addressed. Programming mistakes can lead to subtle and difficult-to-find errors. Because there is no "one-size-fits-all" solution to parallelism, you might find yourself focusing more on developing a concurrency framework and less on solving your original problem.
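To make the "subtle and difficult-to-find errors" concrete, here is a minimal, hypothetical sketch (not from the article) of the classic lost-update race: a plain `int` incremented from several threads drops updates because `++` is a non-atomic read-modify-write, while an `AtomicInteger` counts correctly.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical demo of a data race: the plain counter's ++ is a
// read-modify-write that threads can interleave, losing increments.
// The AtomicInteger performs each increment atomically.
public class RaceDemo {
    static final int THREADS = 4;
    static final int INCREMENTS = 100000;

    static int plainCounter = 0;
    static final AtomicInteger atomicCounter = new AtomicInteger();

    public static void main(String[] args) throws InterruptedException {
        Thread[] workers = new Thread[THREADS];
        for (int i = 0; i < THREADS; i++) {
            workers[i] = new Thread(() -> {
                for (int j = 0; j < INCREMENTS; j++) {
                    plainCounter++;                  // racy: updates can be lost
                    atomicCounter.incrementAndGet(); // atomic: always counted
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();
        // atomicCounter is always THREADS * INCREMENTS;
        // plainCounter is typically less, and varies from run to run.
        System.out.println("plain=" + plainCounter
                + " atomic=" + atomicCounter.get());
    }
}
```

The unsettling part is that the racy version often *appears* to work in testing, which is exactly why such bugs are hard to find.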

Making parallelism easier has been a topic of interest in the Java world lately. Java starts out with basic concurrency constructs, such as threading and synchronization, built into the language, and it provides a consistent memory model regardless of the underlying hardware. Most programmers are now aware of (and perhaps vaguely intimidated by) the utilities of java.util.concurrent added in Java 5. These tools are a step up from working with bare threads, implementing many abstractions you would otherwise need to create yourself. However, the package remains quite low-level; its power comes at the expense of requiring significant knowledge of, and preferably experience with, concurrent programming (Sun, 2006).
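As a small illustration of that "step up from bare threads," here is a hypothetical sketch (the task and numbers are invented for this example) using the java.util.concurrent building blocks: an ExecutorService with a fixed thread pool, Callable tasks, and Futures to collect results.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: summing the squares of 1..n in parallel using an
// ExecutorService instead of creating and joining threads by hand.
public class PoolDemo {
    public static long sumOfSquares(int n, int poolSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            List<Future<Long>> results = new ArrayList<>();
            for (int i = 1; i <= n; i++) {
                final long k = i;
                results.add(pool.submit(() -> k * k)); // Callable<Long>
            }
            long total = 0;
            for (Future<Long> f : results) {
                total += f.get(); // blocks until that task completes
            }
            return total;
        } finally {
            pool.shutdown(); // release the pool's threads
        }
    }

    public static void main(String[] args) throws Exception {
        // 1^2 + 2^2 + ... + 100^2 = 338350
        System.out.println(sumOfSquares(100, 4));
    }
}
```

Even in this tiny example, the pool handles thread lifecycle and work distribution, but the programmer still reasons about task granularity, blocking, and shutdown, which is the low-level character the article describes.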

Fortunately, a number of tools and frameworks that attempt to simplify parallelism in Java are available, or soon will be. We'll look at four of these approaches with the intent of finding the "sweet spot" of each - where they provide the most benefit with least cost - financial, conceptual, or otherwise.
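As a preview of the first of these approaches, here is a minimal divide-and-conquer sketch using the fork/join framework's API as it later shipped in Java 7 (`java.util.concurrent.ForkJoinPool` and `RecursiveTask`); the array-sum task itself is an invented example, not taken from the article.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Illustrative fork/join sketch: sum an array by recursively splitting the
// range until chunks are small enough to sum sequentially, then combining
// the partial results.
public class ArraySum extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000; // tuning knob: chunk size
    private final long[] data;
    private final int lo, hi; // half-open range [lo, hi)

    ArraySum(long[] data, int lo, int hi) {
        this.data = data;
        this.lo = lo;
        this.hi = hi;
    }

    @Override
    protected Long compute() {
        if (hi - lo <= THRESHOLD) { // small enough: sum sequentially
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ArraySum left = new ArraySum(data, lo, mid);
        ArraySum right = new ArraySum(data, mid, hi);
        left.fork();                          // run left half asynchronously
        return right.compute() + left.join(); // do right here, wait for left
    }

    public static long sum(long[] data) {
        return new ForkJoinPool().invoke(new ArraySum(data, 0, data.length));
    }

    public static void main(String[] args) {
        long[] xs = new long[10000];
        for (int i = 0; i < xs.length; i++) xs[i] = i + 1;
        System.out.println(sum(xs)); // 1 + 2 + ... + 10000 = 50005000
    }
}
```

The fork/join style fits recursive, CPU-bound decompositions like this one; the other three approaches target different sweet spots, as the sections below discuss.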

More Stories By Matt Walker

Matt Walker is a scientist at Etsy, where he is building out their data analytics platform and researching techniques for search and advertising. Previously, he worked as a researcher at Adtuitive and as an engineer at Pervasive Software. He holds an MS in computer science from UT and received his BS in electrical and computer engineering from Rice University.

More Stories By Kevin Irwin

Kevin Irwin is a senior engineer at Pervasive Software, working on performance and concurrency within the Pervasive DataRush dataflow engine. With 15 years of industry experience, he previously worked at companies including Sun and IBM, developing high-performance, scalable enterprise software. Kevin holds an MCS and a BA in mathematics and computer science from Rice University.

Most Recent Comments
oletizi 12/09/08 11:58:12 AM EST

While it’s true that familiarity with concurrent programming principles is needed to make full use of all of Terracotta’s developer-facing features, the extensive library of Terracotta Integration Modules (TIMs) for use with third-party technologies allows many people to make use of Terracotta *without* needing to know anything about concurrent programming.

This can be seen to great effect in the high-scale reference web application we built to show how Terracotta is used in a real-world scenario. When you look at the code to examinator, you’ll find very little concurrency-aware code. All of the concurrency is handled inside the various TIMs used by the application (e.g., Spring Webflow, Spring MVC, Spring Security, …).

You can see a full list of available TIMs here:

Like Terracotta itself, all of these TIMs are open source and free for use in production.