Book Review: Systems Performance: Enterprise and the Cloud


Welcome back for another book review. This time, I am going to review a book that I bought when it came out, in late 2013. I have always wanted to review this one, but it seemed I had two options:

  1.  Write a short review that probably does not do the book justice.
  2. Postpone the review for a more suitable time, when $IRL and $DAYJOB allow …

I opted for the second option, as I consider this book to be indispensable (yes, this is going to be a positive review). So, here is the table of contents:

  1. Introduction
  2. Methodology
  3. Operating Systems
  4. Observability Tools
  5. Applications
  6. CPUs
  7. Memory
  8. File Systems
  9. Disks
  10. Network
  11. Cloud Computing
  12. Benchmarking
  13. Case Study
  14. Appendices (which you SHOULD read)

Wow, a lot of content, huh? (Something to be expected, given that the book runs to more than 700 pages.) Do not let the size daunt you, however. Chapters are self-contained, as the author understands that the book might be read under pressure, and they contain useful exercises at the end.

What really makes this book stand out is not the top-notch technical writing or the abundance of useful one-liners; it is the fact that the author goes a step further and suggests a methodology for troubleshooting and performance analysis, as opposed to the ad-hoc methods of the past (or, best-case scenario, a checklist, and $DEITY forbid the use of the “blame someone else” methodology). In particular, the author suggests the USE method, USE standing for Utilization, Saturation and Errors, to methodically and accurately analyze and diagnose problems. This methodology (which can be adapted and expanded at will; last time I checked, the book was not written in stone) is worth the price of the book alone.
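
To make this a bit more concrete, here is a toy illustration of my own (not an excerpt from the book) of what a crude USE-style first pass over a single resource, the CPU, might look like on a Linux box. The /proc-based metric sources and the simplistic saturation heuristic are my choices for the sake of the sketch; the book's checklists go much deeper and cover every resource type.

    #!/usr/bin/env perl
    # A minimal, illustrative USE-style check for the CPU resource on Linux.
    # Metric sources (/proc/stat, /proc/loadavg, /proc/cpuinfo) and the crude
    # "saturated" heuristic are my own choices, not the book's.
    use strict;
    use warnings;

    sub cpu_times {
        open my $fh, '<', '/proc/stat' or die "/proc/stat: $!";
        my ($line) = grep { /^cpu\s/ } <$fh>;    # aggregate "cpu" line only
        my (undef, @t) = split ' ', $line;       # user nice system idle iowait ...
        my $idle  = $t[3] + ($t[4] // 0);        # idle + iowait
        my $total = 0;
        $total += $_ for @t;
        return ($idle, $total);
    }

    # Utilization: percentage of non-idle CPU time over a one-second window.
    my ($idle0, $total0) = cpu_times();
    sleep 1;
    my ($idle1, $total1) = cpu_times();
    my $util = 100 * (1 - ($idle1 - $idle0) / (($total1 - $total0) || 1));

    # Saturation: 1-minute load average compared to the number of CPUs.
    open my $la, '<', '/proc/loadavg' or die "/proc/loadavg: $!";
    my ($load1) = split ' ', scalar <$la>;
    open my $ci, '<', '/proc/cpuinfo' or die "/proc/cpuinfo: $!";
    my $ncpu = grep { /^processor\s*:/ } <$ci>;

    printf "CPU utilization: %.1f%%\n", $util;
    printf "CPU saturation : load %.2f on %d CPUs%s\n",
           $load1, $ncpu, ($load1 > $ncpu ? ' (saturated)' : '');
    # Errors: machine-check exceptions, thermal throttling and friends;
    # check dmesg/mcelog, left out of this sketch.

The point is the shape of the exercise: for each resource you ask how busy it is, whether work is queueing up and whether errors are occurring, and only then do you start digging.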

The author correctly maintains that you must have an X-ray (so to speak) of the system at all times. By utilizing tools such as DTrace (available on Solaris and BSD) or its Linux counterpart SystemTap, much insight can be gained into the internals of a system.

Chapters 5-10 are self-explanatory: the author presents what each chapter is about, common errors, and common one-liners used to diagnose possible problems. As said before, chapters aim to be self-contained and can be read while actually troubleshooting a live system, so there are no lengthy explanations there. At the end of each chapter, the bibliography section provides useful pointers towards resources for further study, something that is greatly appreciated. Finally, the exercises can easily be turned into interview questions, which is another bonus.

Cloud computing and the special considerations it presents get their own chapter, and the author tries to keep it platform-agnostic (even though he is employed by a cloud computing company), which is a nice touch. This is followed by a chapter of useful advice on how to actually benchmark systems, and the book ends with a, sadly too short, case study.

The appendices that follow should be read, as they contain a lot of useful one-liners (as if the ones in the book were not enough), concrete examples of the USE method, a guide to porting DTrace scripts to SystemTap, and a who-is-who in the world of systems performance.

So how to sum up the book? “Incredible value” is one thought that comes to mind; “timeless classic” is another. If you are a systems {operator|engineer|administrator|architect}, this book is a must-have and should be kept within reach at all times. Even if your $DAYJOB does not have “systems” in the title, the book is going to be useful if you have to interact with Unix-like systems on a frequent basis.

PS. Some reviews of this book complain about the binding. In the three physical copies that I have seen with my own eyes, the binding was of the highest quality, so I do not know if this complaint is still valid.

 

Conference review: Distributed Matters Berlin 2015

“Kept you waiting, huh?” – to start the post with a pop culture reference.

Yesterday, I was privileged enough to attend Distributed Matters Berlin 2015. The focus of the conference is, you guessed it, distributed systems, often within a NoSQL context. It was hosted at the awesome KulturBrauerei, a refurbished brewery. The format of the conference was 45-minute presentations, including Q&A, thankfully followed by a 15-minute break between talks, in two tracks. The overall level of the presentations was above average and, given that you could only attend one at a time, it made for hard choices.

Owing to the greatness of Berlin taxi drivers (you know what I am talking about if you have used a taxi in Berlin recently), I managed to attend only half of the keynote by @aphyr, so I am not going to comment on that one. My main takeaway is “always, always read the documentation carefully”.

The next presentation I attended was NoSQL meets Microservices by Michael Hackstein. This one was labelled as beginner. It presented the main paradigms of the NoSQL landscape (KV/Graph/Document), certain topologies, and then the new-ish ArangoDB, a NoSQL database built on V8 JavaScript that claims to support all three paradigms at once, eliminating the need for multiple network hops. Overall, it was well presented, if a tad on the product side, and it served nicely to kick off my conference experience.

After the coffee break, where I was lucky enough to meet some old colleagues from $DAYJOB-1, I attended A tale of queues, from ActiveMQ over Hazelcast to Disque. @xeraa presented his journey with various queueing solutions. He kicked off by stating that the hard problems in distributed systems are exactly-once delivery and guaranteed delivery. He then presented the landscape of existing message queues, giving the rationale behind deciding what to use and, more importantly, what not to use. The talk was quite technical, giving me a lot of pointers for future research. Overall a solid talk, well done!

It was followed by @pcalcado and No Free Lunch, Indeed: Three Years of Microservices at Soundcloud. Phil has amazing presentation skills and described the journey of Soundcloud from a monolithic Ruby on Rails app towards a microservices-oriented architecture. What I liked most about this presentation was not just the great technical content but also the honesty. Evolving your architecture is no trivial task and the road there is full of potential pitfalls. Phil was kind enough to share some of his hard-gained experience with us, which was greatly appreciated.

The lunch break was BAD, ’nuff said. The queue was too long and, by the time I got to the food, the good stuff was gone.

After lunch, I attended Scalable and Cost Efficient Server Architecture by Matti Palosuo. One of the more solid talks, this no-frills presentation did what it said on the tin: it presented the service infrastructure behind EA’s SimCity BuildIt mobile title. Dealing with mobile, casual games presents a unique challenge service-wise, and Matti covered all angles in his presentation, diving deep into the specifics of their implementation.

The next presentation was Containers! Containers! Containers! And now? by Michael Hausenblas. I am not going to comment a lot on this one, since it had no slides and was more like a tech demo. Mesos is an AMAZING product and I would have preferred some technical discussion as opposed to a hands-on demo, but hey, that is just me.

Microservices with Netflix OSS and Spring Cloud by Arnaud Cogoluegnes was the next presentation I attended. It focused on FOSS software by Netflix and how it can be utilized, in the form of Java annotations, within an application context. Useful and well presented; the only thing I personally did not like was certain slides full of code, but this does not take away from the value of the presentation. A bonus point is that, for a Java engineer, this presentation was immediately actionable, with some nice coding takeaways.

Before proceeding with the next presentation, the astute reader of this blog should have noticed a pattern forming by now: microservices. The topic of the next talk was no exception: Microservices – stress-free and without increased heart-attack risk by Uwe Friedrichsen. I really loved this talk. Uwe has a strong opinion regarding microservices (and the experience to back it up). In a nutshell, while microservices can be viable, one should keep a clear head and not fall into the trap of hype-driven architecture. This was my favorite talk of the conference and, without further ado, here are the slides. I cannot speak highly enough of this presentation, so please have a look at them. It was extremely refreshing to see someone deconstruct the microservices hype and present a realistic case.

It was time for the last talk. The choice was between Antirez’s Disque implementation talk and Just Queue it! by Marcos Placona. I decided to give the underdog a chance, given that almost everyone went to Antirez’s presentation (which I am sure was excellent), and attended Marcos’ presentation instead. I was not disappointed: Marcos described his experience with using MQs while migrating a project and gave another overview of the MQ landscape.

After that, I had some food and some orange juice and decided to call it a day. Overall, it was quite a nice conference: good talks, not a lot of marketing, and I will definitely visit the next one if I am able. I met some interesting people as well and grabbed a lot of pointers for future research. Kudos to the organizers.

See you at DevOps Days Berlin 2015.

Book Review: DevOps Troubleshooting

Hello everyone and welcome back for another book review at woktime. Today’s edition is a short review of a short book called “DevOps Troubleshooting: Linux Server Best Practices”. Without further ado, below is the table of contents:

  1. Troubleshooting best practices
  2. Why is the server so slow? Running out of CPU, RAM and Disk I/O
  3. Why won’t the system boot? Solving boot problems
  4. Why can’t I write to the disk? Solving full or corrupt disk issues
  5. Is the server down? Tracking down the source of network problems
  6. Why won’t the hostnames resolve? Solving DNS server issues
  7. Why didn’t my email go through? Tracing email problems
  8. Is the website down? Tracking down web server problems
  9. Why is the database slow? Tracking down database problems
  10. It’s the hardware’s fault? Diagnosing common hardware problems

So let’s start with the title. “DevOps” can be an overloaded term: it means different things to different people and, unfortunately, an “according-to-Hoyle” definition does not exist. I belong to the school of thought that DevOps is more of a cultural movement within an organization than, say, a specific job title, so the title “DevOps Troubleshooting” is meaningless (I would have strongly preferred the term “Linux Systems Troubleshooting”, as it would have been more accurate for reasons that I am going to explain below).

The author is clearly experienced in the realm of Linux administration and he attempts to cover a broad range of topics. The book is approximately 205 pages long, which means that it never gets too deep into a subject, opting instead to cover as many topics as possible. The author’s writing style is quite readable and he goes out of his way to explain things in relative detail. On the plus side, there are no glaring errors: the proofreaders and the author really did go the extra mile to ensure that the content was accurate in the vast number of examples this book provides.

However, my gripe with the book is that the material covered is really basic. Granted, the intended audience is not a veteran system administrator or engineer: by its own admission, this book is aimed at developers or QA personnel who, owing to some definition of DevOps, are thrown into operational duties. The author makes an effort NOT to rely on random, trial-and-error troubleshooting, yet a complete methodology is never introduced.

Overall, this is a well-written book that provides value to a non-operations member of a team doing operations, or to a novice system administrator. Its small size makes it portable enough to be carried around as a level-1 reference; however, for system-level debugging there are better options out there (keep watching this space for the definitive follow-up to this sentence).

Book Review: PostgreSQL Replication

PostgreSQL Replication (book cover)

Continuing my series of systems engineering book reviews, I will proceed with a short review of PostgreSQL Replication by Packt. The reason this book came to be a part of my collection is that, while there is a lot of information regarding PostgreSQL replication out there, much of it is out of date, given the overhaul of the replication system in PostgreSQL 9.x. Without further ado, here is the table of contents:

  • Understanding Replication Concepts
  • Understanding the PostgreSQL Transaction Log
  • Understanding Point-In-Time Recovery
  • Setting up asynchronous replication
  • Setting up synchronous replication
  • Monitoring your setup
  • Understanding Linux High-Availability
  • Working with pgbouncer
  • Working with PgPool
  • Configuring Slony
  • Using Skytools
  • Working with Postgres-XC
  • Scaling with PL/Proxy

The book gets straight into business with an introduction to replication concepts and why this is a hard problem that cannot have a one-size-fits-all solution. Topics such as master-master replication and sharding are addressed as well. After this short introduction, the specifics of PostgreSQL are examined, with a heavy focus on the XLOG and related internals. The book strikes a nice balance here: detailed enough to surpass the trivial level but not overwhelming (and, thank $DEITY, we are spared source code excerpts, although a few references would be nice for those willing to dig further into implementation details), providing a healthy amount of background information.

With that out of the way, a whole chapter is devoted to the topic of Point-In-Time Recovery (PITR from now on). PITR is an invaluable weapon in the arsenal of any DBA and gets a fair and actionable treatment, actionable meaning that you will walk away from this chapter with techniques you can start implementing right away.

With the theory and basic vocabulary defined, the book then dives into replication. Concepts are explained, as well as the drawbacks of each technique, alongside specific technical instructions on how to get there, including a Q&A on common issues that you may encounter in the field.
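
For flavor only (this is not an excerpt from the book, and the host names and paths below are made up for illustration), the kind of 9.x-era configuration these chapters revolve around looks roughly like this, combining WAL archiving for PITR with a streaming standby:

    # postgresql.conf on the primary (PostgreSQL 9.x era)
    wal_level = hot_standby
    archive_mode = on
    archive_command = 'cp %p /var/lib/postgresql/archive/%f'   # PITR archive (example path)
    max_wal_senders = 3

    # recovery.conf on a streaming standby (example host and user)
    standby_mode = 'on'
    primary_conninfo = 'host=primary.example.com port=5432 user=replicator'
    restore_command = 'cp /var/lib/postgresql/archive/%f %p'

The book walks through the asynchronous and synchronous variants of this kind of setup, and how to monitor them, step by step.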

PostgreSQL has a complex ecosystem, and once the built-in replication mechanisms are explained, common tools are presented (with the glaring omission of Bucardo, unfortunately). This is where the book falters a bit, given the excellent quality of the replication-related chapters. The presentation of the tools is neither even nor equally deep in all cases; my gripe is that the Linux-HA chapter stops just when it starts to get interesting. Having pointed this out, these chapters are still better written and more concise than the information scattered around the web. I paid particular attention to the PgPool chapter, which does not cover PgPool-HA (hint: there is more than one way to do it). These chapters assume no previous exposure to the ecosystem, so they serve as a gentle (and, again, actionable) introduction to the specific tools, but I would have preferred them to be 10-15 pages longer each, providing some additional information, especially on the topic of high availability. Even as-is, these chapters will save you a lot of time searching for and compiling information, filling in a few blanks along the way, so, make no mistake, they are still useful. Bonus points for covering Postgres-XC, which is somewhat of an underdog.

A small detail is that the examples in the book tend to focus on Debian-based systems, so if you are administering a Red Hat derivative you should adapt the examples slightly, taking into consideration the differences in the packaging of PostgreSQL. Overall, the book goes for a broad as opposed to deep approach and can serve as a more than solid introductory volume. Inevitably, there is some overlap with the official PostgreSQL manuals, which is to be expected given how good they are. The quality of the book is on par with other Packt Publishing titles, making this an easy-to-read book that will save you a lot of time for certain use cases.

Book Review: Web Operations: Keeping the Data on Time

For the kickoff of my systems engineering book reviews I have chosen this book. While not technical in the strict sense of the term (if you are looking for code snippets or ready-to-use architecture ideas, look elsewhere), this collection of 17 essays provides a bird's-eye view of the relatively new discipline of Web Operations. As you will see from the short TOC below, no stone is left unturned and broad coverage is given to a range of subjects, from NoSQL databases to community management (and all points in between). This is what you will be getting:

  1. Web Operations: The Career
  2. How Picnik Uses Cloud Computing: Lessons Learned
  3. Infrastructure and Application Metrics
  4. Continuous Deployment
  5. Infrastructure As Code
  6. Monitoring
  7. How Complex Systems Fail
  8. Community Management and Web Operations
  9. Dealing with Unexpected Traffic Spikes
  10. Dev and Ops Collaboration and Cooperation
  11. How Your Visitors Feel: User-Facing Metrics
  12. Relational Database Strategy and Tactics for the Web
  13. How to Make Failure Beautiful: The Art and Science of Postmortems
  14. Storage
  15. Nonrelational Databases
  16. Agile Infrastructure
  17. Things That Go Bump in the Night (and How to Sleep Through Them)

Where does one start? Giving a chapter-by-chapter play-by-play is not the way to go here: chapters are short and to the point and use a variety of formats (one of them is a long interview, for example), so I am going to talk about the overall feel of the book.

The roll call of the book is impressive. I am sure that, if you have worked in the field for a little while, names like Theo Schlossnagle, Baron Schwartz, Adam Jacob, Paul Hammond et al. speak for themselves. Every chapter serves as a gentle introduction to the relevant subject matter; this is to be expected, as the topics are quite deep and each one carries a huge assorted bibliography. What I particularly like about this book is not only the gentle introduction; it is also written in a way that makes it approachable to technical managers, team leaders and CTOs. Chapters such as the one on postmortems and the ones on metrics are prime examples of this. What is awesome is that the book helps you identify problem areas in your current business (for example, the lack of configuration management such as Puppet or Chef) and provides you with actionable information. Extra points for openly acknowledging failure: there are more than two chapters related to it (as the saying goes, if you operate at internet scale, something is always on fire, someplace), including a chapter on how to conduct efficient postmortems. Even non-technical areas such as community management are covered, illustrating that running an internet business today is not only about technology.

Your experience with this book will vary greatly. If you are new to the topics at hand, you might benefit from reading each and every chapter and then revisiting the book from time to time: as your experience grows, so will the number of useful ideas you can get out of it. If you are an experienced professional, while this book might not be an epiphany, there is still useful content to apply, and perhaps a few additional viewpoints will present themselves.

Overall? An excellent book for everyone involved in running an internet business, with a lot of value and a long shelf life.

A final nice point is that proceeds from this book go to charity, which is a nice touch.

Coming up on Commodity

For the past few months I have been silent, the last entry being a re-blog from xorl’s (defunct?) blog. That is quite a long time for writer’s block, eh? Well, here is some insight: professionally, I have somewhat moved away from security and towards a systems engineering paradigm. While security still plays an important part, both professionally and in my personal time, it is not the dominant focus. Building systems engineering skills is hard work, especially if you focus on the engineering part as opposed to the systems part (e.g. systems administrator and systems engineer should not be interchangeable terms). My plan is to publish reviews of books and other resources that I found helpful during my journey, as well as some original hacks that I have made. I have a strict policy of not posting anything related to $DAYJOB, but I am more than willing to share some nuggets of experience. So stay tuned and say hi to the revitalized Commodity blog!

Rediscovery and Security News

First things first: Happy 2012 everyone.

So, this blog has been silent for a little while now. More astute readers might argue along the lines of: “Hey man! This is supposed to be a technical blog. Where are all them technical articles? Have you run out of material?”

Take a deep breath, the dreaded, almost compulsory metablogging block after a long pause is coming …

The answer is a big NO! There is an abundance of material that I am proud of, BUT a lot of this research has been done while solving problems for paying clients. The problem can be refined as: “How do you tip-toe around NDAs, and do you choose to do so?” Smart money says not to do it, so I am not. Keep this point in mind for the latter part of this post.

One of the design decisions for this rebooted blog was that it should convey an air of positivity, at least by security and research standards, which is not the happiest of domains. So, for better or worse, I decided to bottle the acid for some time, even if that meant leaving gems such as the following (courtesy of a well-known mailing list) untouched:

    I have problems with those that create malware – under the guise of
    “security research” – which then gets used by the bad guys.

    I’m not saying that one can never stop breaking into things. I just
    don’t like the glorification of creating malware by the so-called
    “good guys”. If all of that energy instead was placed into prevention,
    then we would be better off.

    [SNIP]
    P.S. One might argue that a whitehat or security researcher can’t
    change sides and go into prevention, or in other words, be a Builder
    instead of a Breaker. They can’t because they don’t have the skills to
    do it.

Finished picking your jaw up off the floor? Good! While Cpt. Obvious is on his way with the usual “vuln != exploit != malware” reply, let’s get things moving with a pet peeve of mine that I have not seen addressed.

Almost every time a new security trend comes out, there is nary a hint that it might have been discovered someplace else or sometime before. Given that security overlaps a lot with cryptography, I cannot get my head around the following: while rediscovery is a well-accepted notion within the cryptography field (and this has been proven time and time again), infosec rarely entertains the idea that something you are “discovering” might have been discovered (and countered!) before.

Enter infosec, an ecosystem where NDAs are ten-a-penny, the underground is more tight-lipped than ever, the general consensus is that confidentiality is a necessity, and a lot of “discoveries” are handled either via the black market (and the lack of morals implied therein) or via security brokers. That was all fine and dandy, but with the arrival of fame-seeking researchers and “researchers”, and the fact that infosec makes for entertaining, sensationalist headlines that actually “sell seats in the audience”, we are now bombarded every day with “news” and “research” (use of quotes intentional, if you have not guessed already) that falls into one of the following categories:

  1. News from the obvious department. This one is getting more and more annoying lately, but it is much too obvious a target.
  2. Less obvious stuff that falls below the radar of cargo-cult security, but is still far more likely to have been encountered in the field by serious practitioners who fall into one of the non-disclosure categories listed above.
  3. Actual new and/or insightful findings, which tend to be lost within the sea of useless information; the stuff that REALLY makes your day.

Since there is a very fine line between 2 and 3 (again, 1 is far too easy a target to make fun of or to suggest anything about), and one can never be sure in such a rapidly moving and secretive landscape, for the love of $DEITY, the next time you see something related to infosec findings, keep in the back of your head that it might be a rediscovery. And dear reporters, PLEASE DROP THE SENSATIONAL HEADLINES.

I am not holding my breath that this will ever happen, but one can only hope …

PS: Finally, an image courtesy of the blackhats.com infosuck webcomic. Not exactly the point that I am trying to convey, but the message is quite similar and, in any case, it is much too funny to be left out of the party.

P For Paranoia OR a quick way of overwriting a partition with random-like data

(Surgeon General’s warning: the following post contains doses of paranoia which might exceed your recommended daily dosage. Fnord!)

A lot of the data sanitisation literature out there advises overwriting partitions with random data (by the way, SANS Institute research claims that even a single pass with /dev/zero is enough to stop MFM recovery, but YPMV). So, leaving Gutmann-like techniques aside, in practice, generating random data takes a long time on your average system, which does not contain a cryptographic accelerator. To speed things up, /dev/urandom can be used in lieu of /dev/random, noting that, when read, the non-blocking /dev/urandom device will return as many bytes as are requested, even if the entropy pool is depleted. As a result, the resulting stream is not as cryptographically sound as /dev/random, but it is faster.
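
To put some code where my mouth is, here is a minimal Perl sketch of my own (not lifted from any of the literature mentioned above) of the straightforward approach: streaming /dev/urandom onto a target until the device fills up. The target is whatever you pass on the command line, so triple-check it; this is destructive by design.

    #!/usr/bin/env perl
    # Minimal sketch: overwrite a target (file or partition) with /dev/urandom
    # data until the device is full. Destructive: the target is taken verbatim
    # from the command line, so double-check what you pass in.
    use strict;
    use warnings;

    my $target = shift or die "usage: $0 /dev/sdXN\n";
    my $bs = 1024 * 1024;                        # 1 MiB write size

    open my $rand, '<:raw', '/dev/urandom' or die "/dev/urandom: $!";
    open my $out,  '>:raw', $target        or die "$target: $!";

    my $written = 0;
    while (1) {
        sysread($rand, my $buf, $bs) == $bs or die "short read from urandom: $!";
        my $n = syswrite($out, $buf);
        last unless defined $n;                  # ENOSPC: we have hit the end
        $written += $n;
        last if $n < length $buf;                # partial write at the very end
    }
    close $out;
    printf "wrote %.1f MiB of pseudo-random data to %s\n", $written / (1024 * 1024), $target;

It works, but on an average system without a cryptographic accelerator it will not be fast, which is exactly the itch the rest of this post scratches.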

Assuming that time is of the essence and your paranoia level is low, there is an alternative you can use, which both provides random-like data (which means you do not have to fall back to /dev/zero and keep your fingers crossed) and is significantly faster. Enter Truecrypt. Truecrypt allows for encrypted partitions using a variety of algorithms that have been submitted to peer review and are deemed secure for general usage. I can hear Johnny Sceptical shouting: “Hey, wait a minute now, this is NOT random data, what the heck are you talking about?” First of all, Truecrypt headers aside, let’s see what ent reports. For those of you not familiar with ent, it is a tool that performs a statistical analysis of a given file (or bitstream, if you tell it so), giving you an idea about entropy and other very useful statistics. For more information, see man 1 ent.

For the purposes of this demonstration, I have created the following files:

  • an AES-encrypted container
  • an equivalent-size file with data taken from /dev/urandom (I know, but I was in a hurry)
  • a well-defined binary object in the form of a shared library
  • a system configuration file
  • a seed file which contains a mixture of English, Chinese literature, some C code, and strings(1) output from the non-encrypted swap (wink-wink, nudge-nudge)

Let’s do some ent analysis and see what results we get (for the hastily written, non-strict-compliant Perl code, look at the end of the article):

    ################################################################################
    processing file: P_for_Paranoia.tc 16777216 bytes
    Entropy = 7.999988 bits per byte.

    Optimum compression would reduce the size
    of this 16777216 byte file by 0 percent.

    Chi square distribution for 16777216 samples is 288.04, and randomly
    would exceed this value 10.00 percent of the times.

    Arithmetic mean value of data bytes is 127.4834 (127.5 = random).
    Monte Carlo value for Pi is 3.141790185 (error 0.01 percent).
    Serial correlation coefficient is 0.000414 (totally uncorrelated = 0.0).

    ################################################################################
    processing file: P_for_Paranoia.ur 16777216 bytes
    Entropy = 7.999989 bits per byte.

    Optimum compression would reduce the size
    of this 16777216 byte file by 0 percent.

    Chi square distribution for 16777216 samples is 244.56, and randomly
    would exceed this value 50.00 percent of the times.

    Arithmetic mean value of data bytes is 127.4896 (127.5 = random).
    Monte Carlo value for Pi is 3.143757139 (error 0.07 percent).
    Serial correlation coefficient is -0.000063 (totally uncorrelated = 0.0).

    ################################################################################
    processing file: seed 16671329 bytes
    Entropy = 5.751438 bits per byte.

    Optimum compression would reduce the size
    of this 16671329 byte file by 28 percent.

    Chi square distribution for 16671329 samples is 101326138.53, and randomly
    would exceed this value 0.01 percent of the times.

    Arithmetic mean value of data bytes is 82.9071 (127.5 = random).
    Monte Carlo value for Pi is 3.969926804 (error 26.37 percent).
    Serial correlation coefficient is 0.349229 (totally uncorrelated = 0.0).

    ################################################################################
    processing file: /etc/passwd 1854 bytes
    Entropy = 4.898835 bits per byte.

    Optimum compression would reduce the size
    of this 1854 byte file by 38 percent.

    Chi square distribution for 1854 samples is 20243.47, and randomly
    would exceed this value 0.01 percent of the times.

    Arithmetic mean value of data bytes is 86.1019 (127.5 = random).
    Monte Carlo value for Pi is 4.000000000 (error 27.32 percent).
    Serial correlation coefficient is 0.181177 (totally uncorrelated = 0.0).

    ################################################################################
    processing file: /usr/lib/firefox-4.0.1/libxul.so 31852744 bytes
    Entropy = 5.666035 bits per byte

    Optimum compression would reduce the size
    of this 31852744 byte file by 29 percent.

    Chi square distribution for 31852744 samples is 899704400.21, and randomly
    would exceed this value 0.01 percent of the times.

    Arithmetic mean value of data bytes is 74.9209 (127.5 = random).
    Monte Carlo value for Pi is 3.563090648 (error 13.42 percent).
    Serial correlation coefficient is 0.391466 (totally uncorrelated = 0.0).

Focusing on entropy, we see that:

Truecrypt: Entropy = 7.999988 bits per byte.
/dev/urandom: Entropy = 7.999989 bits per byte.

These are directly comparable (if you trust ent, that is), much better than a well-structured binary file (5.666035 bits per byte), and head and shoulders above our seed results (which are a conglomerate unlikely to be encountered in practice). The chi-square “exceeded randomly” percentages differ by a factor of five in our example (10% for the Truecrypt container versus 50% for /dev/urandom), in favor of the /dev/urandom data, but both are still far better than the 0.01% seen in the other test cases.

From the above, there is a strong indication that when you need random-like data and /dev/urandom is too slow, for example when you want to “randomize” your swap area (something I will elaborate on in an upcoming post), a Truecrypt volume will do in a pinch.

    #!/usr/bin/env perl
    use warnings;
    use File::stat;
    # a 5 min script (AKA no strict compliance) to supplement results for a blog article
    # why perl? Nostalgia :-)

    @subjects = qw(P_for_Paranoia.tc P_for_Paranoia.ur seed /etc/passwd /usr/lib/firefox-4.0.1/libxul.so);

    sub analyzeEnt {
        my ($file) = @_;
        my $sz  = stat($file)->size;
        my $ent = `ent $file` . "\n";
        print "#" x 80 . "\nprocessing file: $file " . $sz . " bytes\n" . $ent;
    }

    foreach my $subject (@subjects) {
        &analyzeEnt($subject);
    }