17 Jan 2012

Big Data Engineer

PeerIndex – Posted by TechStartupJobs – London, England, United Kingdom

Job Description

Hadoop/Big Data Engineer

Introduction
You: a “big data” engineer who has the experience and ability to process huge amounts of data, and who is interested in building a model for predicting who the most influential people online are. You enjoy working on problems such as:

  • processing billions of facts from the social web, identifying conversations and their topics
  • analysing terabytes of data harvested from the social web to identify user interests and influencers, applying elegant maths that you and your colleagues designed
  • debating with your colleagues how a retweet compares in importance to a +1 on Google+

Who Are We?
We are PeerIndex (www.peerindex.com), a company based at the heart of Silicon Roundabout, leading the way in helping companies and individuals understand and capitalise on the impact of their social influence, on the web and off.

We are changing the face of social engagement and authority – we are about giving individuals ownership over their influence and the value of their data – and we are looking for people to join us in this adventure.

We are expanding our Data Engineering team – building an environment where the most creative people can explore and build upon some of the largest data sets in the world, elegantly creating meaning out of the social web.

Our team is bold and brilliant, and we are looking for like-minded people. We are about changing the world – not about punching the clock.

Who Are You?
You are a graduate in computer science or have equivalent experience (not all education happens in universities); you have previous experience crunching through large data sets using tools like Hadoop or Mahout; and you can argue the finer points of why Java is a better language than PHP or Python for “big data” efforts, or vice versa.

You can hold your own in a software architecture discussion and can sense when the solution presented is not the right way forward.

You are likely a closet maths geek (yes, you were calculating the odds of Spain winning the World Cup) – but your true calling is developing elegant code alongside other amazing people.

You will be responsible for:

  • Building and operating our large-scale data pipeline (we call it PISAE), designed to ingest and process hundreds of millions of items from the social web every day – a flavour of this kind of work is sketched below this list
  • Implementing and optimising our Hadoop stack (Hadoop, HDFS, Thrift, Hive, Pig, Azkaban) as well as enhancing these tools (contributing back to the open-source community)
  • Operationalising the predictive analytics designed by our Data Engineering team
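
To give a flavour of what this looks like in practice, here is a minimal, generic Hadoop MapReduce job in Java – a word-count-style sketch of our own, not the actual PISAE code; the class name TermCount is purely illustrative:

// A generic word-count-style MapReduce job: counts how often each term
// appears across a pile of social-web text. Illustrative only.
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TermCount {

  // Mapper: emit (term, 1) for every token in each input line.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken().toLowerCase());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sum the counts emitted for each term.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = new Job(new Configuration(), "term count");
    job.setJarByClass(TermCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // combiner cuts shuffle traffic
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The real pipeline is, of course, far more involved – stages chained together with tools like Pig, Hive, and Azkaban – but the map-and-reduce shape above is the basic unit of work.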

We think our influence algorithms will take on the greater challenge – building the PageRank of People. Sure, you could go to Google, or you could be involved with the development and execution of a novel and groundbreaking service.

While we are currently based in London’s Old Street tech quarter, our organisation is distributed over five time zones, so we are also willing to speak with exceptional remote applicants.

What Must I Have?
At the very minimum, you must be able to work in the EU. Please do not apply if you do not already have a valid work permit for the EU (unless you are from the US, in which case you may still apply – our CTO is an American).

Additionally, we would prefer that you have some level of experience coding “big data” solutions – with an emphasis on Hadoop/Mahout – or solid MapReduce experience. Experience with the Hadoop or Cloudera frameworks is desired; we also play around with EMR and enjoy a bit of PHP and Python coding as well.

Additionally, we prefer that you have experience with:

  • Java, Hadoop, and MapReduce
  • a DVCS (Git or Mercurial)
  • service-oriented architectures, web services, and cloud technologies

Can I Get Bonus Points?
Sure – we would love it if you have an affection for, and experience with, some of the functional languages that are making a splash (like Erlang and Scala). We would also like:

  • A computer science or similar quantitative degree is desired – but some of the best developers are self-taught
  • Familiarity with languages and frameworks like Grails, Haskell, Python, Scala, and Clojure
  • Experience in predictive modelling gained in a commercially oriented technology business
  • An understanding of schema-less database architectures, e.g. MongoDB, Cassandra, or CouchDB
  • …and you enjoy discussing the differences between the Browncoats and the Alliance.

Okay – so how much do I make?
Well, we would be telling if we gave away the salary right off the bat. But you can be sure that pay is commensurate with your experience and effort. And yes, you will have ownership in the company through our options package, along with other potential benefits.

How Do I Apply?
Simple: fill in the elements on the right and submit a brief cover letter and your CV, and (more importantly) point us to your personal blog, your GitHub and/or StackOverflow profile, any meetups you attend, and/or any open-source projects you contribute to or mashups you have built. You choose what we should look at.

So if your idea of fun is spinning up 200 cloud-based servers and pulling down gigabytes of data to process through Mongo and Hadoop, then we’d really like to hear from you. Around here, we call that a ‘Wednesday’.

 

How to Apply

Meet the team at the TechStartupJobs Fair on the 18th of January or apply directly using the 'APPLY ONLINE' button below.

Job Categories: Development. Job Tags: big data, hadoop, and startup.
