
Two great talks for Wednesday

Two great presentations I’ve recently read and want to share with you:

Tim O’Reilly and John Battelle’s “Web Squared: Web 2.0 Five Years On”.

Read that, perhaps while watching/listening to Kevin Kelly’s riveting talk (from the TED EG conference) on the next 5000 days of the web (that’s right, the web is only 5000 days old) and the future of the Semantic Web. Kelly says we’re building not a series of small machines on the Internet, but one gigantic thinking machine, approaching the connectivity level of the human mind.


Posted by John Adams on July 8th, 2009


Velocity 2009

Last Tuesday, I was part of the Velocity 2009 Keynote, where I gave a talk entitled “Fixing Twitter”. I covered the last year or so of work improving Twitter to deal with the massive traffic and user loads we’ve been under, and how we use metrics to destroy the fail-whale.

Details of the talk are available on the Velocity 2009 site.

You can watch the presentation (via blip.tv) and download a PDF containing all of the slides here.

Update: It looks like blip.tv and O’Reilly moved some links around. Page updated.

Posted by John Adams on June 28th, 2009


Predicting the End of the World with Mathematica

Frequently you want to predict when things are going to happen, and if it’s not the end of the world, it might be something occurring a bit sooner, such as your disk filling up.

First, capture some data with cron. We’re going to record the size of our database file once a day, so we’ll put something like this in cron, set to run every night at midnight:

0 0 * * * ls -l /var/log/somefile | tail -1 >> /tmp/somefile_log

Wait a few days. We were looking at daily file growth, so we waited a full week to collect data. We had that luxury at the time.

Now you’ll have a file with a series of ls entries in it. Run those through awk and capture the file sizes:

cat /tmp/somefile_log | awk '{ print $5 }'
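To get those numbers into a file that Mathematica can read later, redirect awk’s output into a scratch file (the path here is arbitrary):

awk '{ print $5 }' /tmp/somefile_log > /tmp/sizes.txt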

At this point, it’s time to fire up Mathematica. Mathematica is a stunning piece of software from Wolfram Research, used for data visualization and scientific work across a number of industries.

First, let’s load the data into Mathematica.
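Something like this will do it. This is a sketch: it assumes the sizes were saved to /tmp/sizes.txt as above, and datastartdate should be whatever day your first sample was taken (the date below is just an example).

data = ReadList["/tmp/sizes.txt", Number];

(* date of the first sample; DatePlus uses this at the end to turn a day
   count into a calendar date. Substitute your own start date. *)
datastartdate = {2009, 6, 15};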

What we’re going to do now is copy the first data point into the Fit as the constant term, and create a function that will let us predict the future.

(* Fit the data to a curve using a polynomial model; make sure you insert the
   1st data point as the constant term or the curve fit will be bad *)
result = Fit[data, {336660004864, x, x^2}, x]

(* current free space on our partition (312334824 KB), adding back the 1st
   data point since that space is already allocated *)
diskfree = (312334824 * 1024) + First[data]

(* use the fit to build a free-space-over-time function *)
diskfreefunc = diskfree - result

(* when diskfreefunc hits zero, we are dead *)
deathday = NSolve[diskfreefunc == 0, x]
deathday = Take[x /. %, -1]
deathday = deathday[[1]]
DatePlus[datastartdate, N[deathday]]

So now we know when this data set will hit zero, we have the date of that failure, and the ability to graph when it will happen.

For this example, plotting the data against the fit makes the trend clear.
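Given the definitions above, a quick way to draw it (the plot window is chosen to bracket the zero crossing):

(* remaining free space by day; the curve hits zero near day 64 *)
Plot[diskfreefunc, {x, 0, 80}]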

We now know that in ~64 days, we’ll run out of disk space. Prediction is pretty nice, huh?

For you stats types, you’ll want to know how good the fit is for this curve, and for that, we look at R².
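One way to get it in Mathematica is LinearModelFit, which performs the same quadratic fit and reports the statistic directly; a sketch, using the data loaded above:

lm = LinearModelFit[data, {x, x^2}, x];
lm["RSquared"]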

Posted by John Adams on June 25th, 2009


Memcached and MySQL – What good is it?

I posted this in response to a post on GigaOM, but it was such a long comment that I felt it was worthy of a post of its own.

The workloads of social networking sites fall mostly into the ‘read lots, write once’ class (most of the web exists within this paradigm). Regardless of which company’s database software you run, the main idea in scaling this read-heavy workload is to remove the burden from the database and move it to distributed memory stores.

As an engineer, you want applications to pull from the same cache pool to reduce I/O pressure. To ensure that every machine isn’t replicating data in individual caches, you have to go distributed. That’s the win with memcached.

Putting a distributed cache between the application and the database increases performance and shares data across your application servers, something that the database cannot do on its own. The database has on-disk and in-memory caching, but eventually you’ll run out of memory on a single host if your working set exceeds the host’s memory.

Memcached also covers up replication lag (MySQL is terrible at replication, Oracle not so much) in large environments by putting data into the distributed cache (write-through caching) before the slave database has finished its writes. Data is available to clients immediately, before the replication has completed.
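To make both patterns concrete, here’s a minimal Python sketch. It assumes the python-memcached client, and db_fetch_user / db_write_user are hypothetical helpers standing in for your real data layer:

import memcache

mc = memcache.Client(["10.0.0.1:11211", "10.0.0.2:11211"])

def get_user(user_id):
    # Read path: try the shared cache pool first, fall back to the
    # database on a miss, and repopulate the cache for every app server.
    key = "user:%d" % user_id
    user = mc.get(key)
    if user is None:
        user = db_fetch_user(user_id)  # hypothetical DB helper
        mc.set(key, user, time=300)    # cache for five minutes
    return user

def save_user(user_id, user):
    # Write-through: update the cache at write time, so readers see the
    # new value immediately instead of waiting on slave replication.
    db_write_user(user_id, user)       # hypothetical DB helper
    mc.set("user:%d" % user_id, user, time=300)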

It will also provide a large amount of savings when you’re constantly executing that O(n×m) query to find out who is friends with whom on your social networking site.

This comes at a cost, though. Relational database functions, like joins across large data sets and atomic operations, become very difficult to execute. Memcached becomes the central server, and there is always the fear that an important key will drop out of the cache because of a random eviction.

It’s not without risk, either. Dependence on the cache can hurt you severely if lots of memcached servers fail (and they do fail), leaving you in a ‘cold cache’ situation where it can take hours to repopulate your working set into the cache pool.

Don’t question MySQL’s performance; relational databases are great, but they are not the only solution to storage problems. The two problems being solved here are highly orthogonal.

I’d also like to state that the majority of the alternative key-value stores listed in Richard Jones’ article and on Leonard Lin’s blog are really not ready for high production loads (with the possible exceptions of Tokyo Cabinet, HDFS, and Cassandra). There is still a ton of ‘secret sauce’ that the large sites are keeping quiet about in order to make these into effective data stores.

Lin states this in his review as well: “Your comfort-level running in prod may vary, but for most sane people, I doubt you’d want to.”

Tread lightly.

Posted by John Adams on May 17th, 2009


Announcing mod_memcache_block

I’m announcing the release of mod_memcache_block, a distributed IP blocking system for Apache, with rate limiting based on HTTP response code.

For many years I’ve needed a module like this: a distributed blocking system that could operate across large web-serving clusters and register hits in a central store. For rate limiting, incrementing counters on a single host is fairly useless when you have hundreds of servers behind a load balancer.

An attacker could hit many machines within the limit period before being detected, because there would be no central count. By keeping the counts in a memcache pool, all servers share the same data.
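The counting trick itself is simple. Here’s roughly the idea in Python (a sketch of the pattern, not the module’s actual code; python-memcached client assumed):

import time
import memcache

mc = memcache.Client(["10.0.0.1:11211"])

def over_limit(ip, status, limit=20, window=60):
    # One counter per (client IP, response code, time window), shared by
    # every web server in the cluster through the memcache pool.
    key = "rl:%s:%d:%d" % (ip, status, int(time.time()) // window)
    mc.add(key, 0, time=window * 2)  # no-op if the counter already exists
    count = mc.incr(key) or 0        # None if the pool is unreachable
    return count > limit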

It won’t defend against attacks coming from random proxy addresses (say, Tor), and might unfairly count hundreds of users who live behind a single proxy (like corporate NAT), but it offers some protection against attacks coming from a single source IP.

The software is released under the Apache 2.0 Open Source License.

From the docs:

mod_memcache_block is an Apache module that allows you to block access to your servers using a block list stored in memcache. It also offers distributed rate limiting based on HTTP response code.

FEATURES

Distributed white- and blacklisting of IPs, ranges, and CIDR blocks
Configurable timeouts and memcache server lists
Support for consistent hashing using libmemcached’s Ketama
Windowed rate limiting based on response code (to block brute-force dictionary attacks against .htpasswd, for example)

REQUIREMENTS

libmemcached-0.25 or better
Memcached server
Apache 2.x (tested with 2.2.11)

Source code is available here:
http://github.com/netik/mod_memcache_block
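Building it should follow the usual third-party module routine; something like this, assuming apxs from your Apache 2.x install, libmemcached in the default search path, and that the source file carries the module’s name:

apxs -cia -lmemcached mod_memcache_block.c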

If you would like to work on mod_memcache_block, contact me with your GitHub username and I’ll give you commit access.

Posted by John Adams on May 7th, 2009


Velocity Preview

There’s a small interview with me in today’s O’Reilly Radar, where I talk about some of the things that I’ll be presenting as part of my Velocity 2009 talk. You can listen to the interview and read the transcript there.

Posted by John Adams on May 7th, 2009


Using GPS to enhance social networking

A bit of last-minute news, but I’ll be on a panel at SXSW Interactive: “Using GPS & Location to Enhance Social Networking”.

First there were social networks, then location-based social networks, and now GPS- and navigation-enhanced mobile social networks. This panel will explore how these emerging platforms integrate with existing social networks (Facebook, Twitter, etc.), leverage GPS navigation functionality, and take location-aware social networking to the next level.

It’s at 5pm on Tuesday, the 17th of March.

More details are available on the panel’s schedule page.

Posted by John Adams on February 13th, 2009


Twitter in New York magazine


Normally I don’t re-post Twitter articles here, but this one in New York magazine was wistful, fair, balanced, and gave a good representation of what it’s like to work here.

The reporter was in the office on the very day the US Airways flight crashed into the Hudson, and he recorded our (completely boring) reactions to the event.


Sure, the Twitter guys still have no idea how to make money off their fabulous invention. But for now they are living in a dreamworld of infinite possibilities, maybe the last one on Earth.

How Tweet it Is – New York Magazine


Posted by John Adams on February 9th, 2009


Find all the virtual hosts on a single IP


Since people started using name-based virtual hosts with Apache HTTPD and other web servers, it has become very difficult to figure out which virtual hosts live on a single IP if all you have is the IP address.

Have a look at the Robtex Internet Swiss Army Knife. It solves this problem, and far more, including AS number lookups, BGP dereferencing, and DNS checks. There’s a Firefox search toolbar available for the site (very useful!) and RBL (blacklist) check tools right on its main page.


Posted by John Adams on February 2nd, 2009


Finding usernames through iTunes DAAP

Often on our local network, someone will start using up all of our outbound Internet bandwidth, and this leads to the network administrator’s dilemma:

How do we find the user in question so we can thump them on the head to make them stop?

This is a basic exercise in information gathering. For the most part, we’ll have the user’s IP address, and we’re a Mac shop with many users running iTunes. If the user is sharing their library, you can use iTunes as a covert means of determining the user’s name, because iTunes uses the local computer’s name as the library name.

Telnet to the machine’s DAAP port (TCP 3689) and issue:


John-adamss-macbook-pro:~ jna$ telnet x.x.x.x 3689
Trying x.x.x.x...
Connected to x.x.x.x.
Escape character is '^]'.
GET /server-info HTTP/1.1
Host: x.x.x.x
Client-DAAP-Version: 3.7
User-Agent: iTunes/8.0.2 (Macintosh; N; Intel)
Accept-Language: en-us, en;q=0.50

HTTP/1.1 200 OK
Date: Tue, 13 Jan 2009 21:26:38 GMT
DAAP-Server: iTunes/8.0.2 (Mac OS X)
Content-Type: application/x-dmap-tagged
Content-Length: 280

msrvmstt?mproaproaeSVaeFPatedmsedmsmlmsmOk?[minmUSER NAME’s LibrarymslrmstmsalmsasmsupmspimsexmsbrmsqymsixmsrsmsdcmstcImmsto???
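If you’d rather not eyeball the binary, the name is easy to pull out programmatically. DMAP entries are a 4-byte tag, a 4-byte big-endian length, and a payload, and the library name lives under the minm (item name) tag. A minimal Python sketch, with no error handling, assuming an open, unauthenticated share:

import socket
import struct

def library_name(host, port=3689):
    request = (
        "GET /server-info HTTP/1.1\r\n"
        "Host: %s\r\n"
        "Client-DAAP-Version: 3.7\r\n"
        "Connection: close\r\n\r\n" % host
    ).encode("ascii")
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(request)
        response = b""
        while True:
            chunk = sock.recv(4096)
            if not chunk:
                break
            response += chunk
    # Skip the HTTP headers, then find the minm tag and read its
    # length-prefixed payload.
    body = response.split(b"\r\n\r\n", 1)[1]
    i = body.find(b"minm")
    if i < 0:
        return None
    (length,) = struct.unpack(">I", body[i + 4:i + 8])
    return body[i + 8:i + 8 + length].decode("utf-8", "replace")

print(library_name("x.x.x.x"))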

Other options include attempting to sign on to the server with Apple-K if AFP on TCP port 548 is active (which will reveal the computer’s name), and using nmap with service detection to glean information about the host.
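For the nmap route, version detection against the two interesting ports looks like:

nmap -sV -p 548,3689 x.x.x.x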

Posted by John Adams on January 13th, 2009
