Last updated: 29 September 2012 22:17
All times are UTC.

Powered by:

Planet LCA2010

26 September 2012

Stewart Smith

New libeatmydata release (65): Mac OS X 10.7 fixes

This release incorporates contributions from Blair Zajac to fix issues on Mac OS X 10.7.

You can get the source tarball over on the launchpad page for the release or directly from my web site:

by Stewart Smith at 26 September 2012 07:14

Sage Weil

v0.52 released

After several weeks of testing v0.52 is ready!  This is a big release for RBD and radosgw users.  Highlights include:

  • librbd: fully functional and documented image cloning
  • librbd: image (advisory) locking
  • librbd: ‘protect’/‘unprotect’ commands to prevent clone parent from being deleted
  • librbd: ‘flatten’ command to sever clone parent relationship
  • librbd: a few fixes to ‘discard’ support
  • osd: several out of order reply bug fixes
  • msgr: improved failure handling code
  • auth: expanded authentication settings for greater flexibility
  • mon: ‘report’ command for dumping detailed cluster status
  • mon: throttle client messages (limit memory consumption)
  • mon: more informative info about stuck PGs in ‘health detail’
  • osd, mon: use feature bits to lock out clients lacking CRUSH tunables when they are in use
  • radosgw: support for swift manifest objects
  • radosgw: support for multi-object deletes
  • radosgw: improved garbage collection framework
  • rados: bench command now cleans up after itself
  • ceph.spec: misc packaging fixes

The big news in this release is that the new RBD cloning functionality is fully in place.  This includes the ability to take a base image (snapshot) and instantly ‘clone’ it to other images.  The typical use case is cloning a base OS install image for each VM, allowing you to immediately boot them up without waiting for any data to copy.  RBD also got advisory locking support, which allows users to (cooperatively) control who is using each image and avoid situations where multiple hosts write to the same image and corrupt the file system.  There is additional integration work on the roadmap that will make this easier to use, but all of the pieces are in place for users to start taking advantage of it now.
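Against a live cluster, the clone workflow sketches out roughly like this (a sketch only — the pool and image names here are hypothetical and do not appear in the release notes):

```shell
# Sketch only: requires a running cluster with v0.52 clients,
# and the parent must use the new RBD image format.
rbd snap create rbd/precise-base@golden      # snapshot the base image
rbd snap protect rbd/precise-base@golden     # guard the parent against deletion
rbd clone rbd/precise-base@golden rbd/vm-01  # instant copy-on-write clone
rbd lock add rbd/vm-01 host-a                # advisory (cooperative) lock
rbd flatten rbd/vm-01                        # later: sever the parent link
rbd snap unprotect rbd/precise-base@golden   # allowed once no clones depend on it
```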

This release also includes several improvements to the radosgw.  On the user-facing API side this includes support for Swift ‘manifest’ objects (large objects uploaded in pieces) and support for multi-object delete.  On the administrative side, there is a new garbage collection framework that makes the cleanup of deleted objects transparent, automatic, and efficient.  (Currently a radosgw-admin command run from something like cron is necessary to clear out old data.)

On the release side, this is also the first release for which we are building RPMs.  Hooray!  We’re starting with just CentOS6/RHEL6 and Fedora 17 on x86_64, but will be adding additional distributions for v0.53, including OpenSUSE and Fedora 18.  If there is a particular RPM-based distro that you’d like to see us build packages for, please let us know!

You can get v0.52 from:

by sage at 26 September 2012 03:41

23 September 2012

Selena Deckelmann

Wrapping up Postgres Open, new job, shift away from twitter

Last week in Chicago was amazing! 37 speakers, an incredible keynote by Jacob Kaplan-Moss (video coming soon!) and re-connecting with all the great people in Chicago. We announced a new conference committee for next year’s conference, and said we’d do it again in September in Chicago! That group is just getting started now, and will have some announcements for everyone in the coming weeks.

I’m going to be busy with a new job at Mozilla starting Monday, working on databases with the WebTools team.

Another small change is: I’m writing a few times a day to my tumblr and I’ve just stopped using twitter for the next few weeks. In the last day, I have really only thought about one or two things to share that would have been more than fleetingly useful. As I come across things, I’ll be sending them to the tumblr instead.

I’m also looking for patches and projects to work on for Postgres itself. During Thursday’s code sprint, I picked up an old patch for config directories, and today I spent some time re-generating a list of contributor names for the last 5 major versions of Postgres.

As usual, I feel so energized from hanging out with my favorite Postgres people. I’m only sad that I won’t see most of them in person again until next year.

by selena at 23 September 2012 03:45

22 September 2012

Bob Brown

Suspend laptop when lid is closed and power is removed

I changed my power settings on Ubuntu (although that is probably irrelevant for this topic) to sleep when the lid is closed when on battery but not to sleep when the lid is closed and the laptop is powered by AC.

Then, with the laptop on AC and the lid closed (i.e. running normally) I unplugged the AC and put the laptop into my laptop bag and went off to do some things in town.

An hour later I pulled my laptop out of the bag and the hot plastic smell was alarming – fortunately the laptop appears to be OK but this isn’t an episode I want to repeat. I figured that since Linux is quite configurable and is often driven by scripts that are run when certain events occur this issue should be preventable.

While I didn’t find a direct solution via Google I did know enough to eventually find what I wanted. I present this solution as a result. This method appears to work for Ubuntu 12.04 – it may work on other things as well. I didn’t have to install anything in particular to make this work but it does use the pm-utils (power management utils) stuff which may or may not be part of your distribution:

Also I’m not sure how robust this method of determining the lid state is (e.g. what if you have both a LID0 and a LID1?). Someone said they wanted to see my laptop if it had a LID0 and a LID1 :)
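For what it’s worth, the lid check could be made robust against multiple lid devices by iterating over everything under /proc/acpi/button/lid/ instead of hard-coding LID0. A sketch (the directory argument exists only so the function is easy to test; on a real system the default path applies):

```shell
#!/bin/sh
# Count how many lid devices report "open"; 0 means every lid is closed.
open_lids() {
    base="${1:-/proc/acpi/button/lid}"
    count=0
    for state in "$base"/*/state; do
        [ -r "$state" ] || continue             # skip if no lid devices exist
        grep -q open "$state" && count=$((count + 1))
    done
    echo "$count"
}
```

Used as `[ "$(open_lids)" -eq 0 ] && pm-suspend`, this suspends only when every lid is closed.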

Anyway, I created the file /usr/lib/pm-utils/power.d/suspend-when-ac-removed-while-lid-closed and made sure it was executable (i.e. chmod a+x /usr/lib/pm-utils/power.d/suspend-when-ac-removed-while-lid-closed)

Now for the script:

#!/bin/sh
# pm-utils runs power.d scripts with "true" when switching to battery
# and "false" when returning to AC power.

NA=254   # pm-utils exit code for "not applicable"

suspend_if_lid_closed() {
    # Count lines containing "open": 1 = lid open, 0 = lid closed
    LIDSTATE=$(grep -c open /proc/acpi/button/lid/LID0/state)
    case "$LIDSTATE" in
        0) pm-suspend ;;                        # lid closed: suspend
        1) ;;                                   # lid open: nothing to do
        *) echo "Unknown lid state: $LIDSTATE" ;;
    esac
}

case "$1" in
    true)  suspend_if_lid_closed ;;   # AC power removed
    false) exit $NA ;;                # back on AC: nothing to do
    *)     exit $NA ;;
esac
exit 0

I didn’t need to restart pm-utils or any services; this just started working when I dropped the file in. To test:

  1. Configure Ubuntu to not suspend when you shut the lid and there is AC power.
  2. Shut the lid of your laptop (it should remain on).
  3. Remove the AC.
  4. The laptop should suspend.
  5. Open the lid and unlock the screen.
  6. Remove the execute flag (effectively disabling the script), chmod a-x /usr/lib/pm-utils/power.d/suspend-when-ac-removed-while-lid-closed
  7. Shut the lid and remove AC.
  8. The laptop should stay on.

In my testing over the last few days this appears to work all of the time – still, after the heating episode I do check each time to see that it does sleep.

by GuruBob at 22 September 2012 04:58

21 September 2012

Scott James Remnant

Book Review: Redshirts


Redshirts has a premise that no sci-fi geek could ever pass up an opportunity to read. In the future, newly assigned starship crew realize that, for no apparent reason, the captain always takes a junior crew member on away teams – and that crew member dies every time.

And indeed this book starts off well, exploring the idea from the point of view of the newly arrived ensigns, with plenty of tropes and references to delight the geek reader.

“In other words, crew deaths are a feature, not a bug,” Cassaway said, dryly.

Unfortunately it then doesn’t seem to know where to go, turning to another trope as the crew travel back in time to our present day to find the actors playing them in a TV series; and it ends oddly, with a third of the book still to go, in a series of codas that don’t really seem to fit the original narrative.


by scott at 21 September 2012 18:26

20 September 2012

Sage Weil

v0.48.2 ‘argonaut’ stable update released

Another update to the stable “argonaut” series has been released. This fixes a few important bugs in rbd and radosgw and includes a series of changes to upstart and deployment related scripts that will allow the upcoming ‘ceph-deploy’ tool to work with the argonaut release.


  • The default search path for keyring files now includes /etc/ceph/ceph.$name.keyring. If such files are present on your cluster, be aware that by default they may now be used.
  • There are several changes to the upstart init files. These have not been previously documented or recommended. Any existing users should review the changes before upgrading.
  • The ceph-disk-prepare and ceph-disk-activate scripts have been updated significantly. These have not been previously documented or recommended. Any existing users should review the changes before upgrading.
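In the new default keyring search path, $name expands to each daemon’s type.id; a sketch of the expansion (the daemon name osd.0 is just an illustrative example):

```shell
#!/bin/sh
# $name is the daemon's type.id, e.g. "osd.0" for the first OSD.
name="osd.0"
echo "/etc/ceph/ceph.${name}.keyring"   # /etc/ceph/ceph.osd.0.keyring
```

So any stray file matching that pattern on a cluster node will now be picked up by default.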

Notable changes include:

  • mkcephfs: fix keyring generation for mds, osd when default paths are used
  • radosgw: fix bug causing occasional corruption of per-bucket stats
  • radosgw: workaround to avoid previously corrupted stats from going negative
  • radosgw: fix bug in usage stats reporting on busy buckets
  • radosgw: fix Content-Range: header for objects bigger than 2 GB.
  • rbd: avoid leaving a watch active when the command line tool errors out (avoids 30s delay on subsequent operations)
  • rbd: friendlier use of pool/image options for import (old calling convention still works)
  • librbd: fix rare snapshot creation race (could lose a snap when creation is concurrent)
  • librbd: fix discard handling when spanning holes
  • librbd: fix memory leak on discard when caching is enabled
  • objecter: misc fixes for op reordering
  • objecter: fix for rare startup-time deadlock waiting for osdmap
  • ceph: fix usage
  • mon: reduce log noise about check_sub
  • ceph-disk-activate: misc fixes, improvements
  • ceph-disk-prepare: partition and format osd disks automatically
  • upstart: start everyone on a reboot
  • upstart: always update the osd crush location on start if specified in the config
  • config: add /etc/ceph/ceph.$name.keyring to default keyring search path
  • ceph.spec: don’t package crush headers

You can get this release from the usual locations:

by sage at 20 September 2012 16:55

Stewart Smith

Impact of MySQL slow query log

So, what impact does enabling the slow query log have on MySQL?

I decided to run some numbers. I’m using my laptop, as we all know the currently most-deployed database servers have multiple cores, SSDs and many GB of RAM. For the curious: Intel(R) Core(TM) i7-2620M CPU @ 2.70GHz

The benchmark is going to be:
mysqlslap -u root test -S var/tmp/mysqld.1.sock -q 'select 1;' --number-of-queries=1000000 --concurrency=64 --create-schema=test

Which is pretty much “run a whole bunch of nothing, excluding all the overhead of storage engines, optimizer… and focus on logging”.

My first run was going to be with the slow query log on. I’ll start the server with as it’s just easy:
eatmydata ./ --start-and-exit --mysqld=--slow-query-log --mysqld=--long-query-time=0

The results? It took 18 seconds.

How long without the slow query log (starting with again, but this time without any of the extra mysqld options)? 13 seconds.

How does this compare to a Drizzle baseline? On a freshly build Drizzle trunk, using the same mysqlslap binary I used above, connecting via UNIX socket: 8 seconds.
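Back-of-envelope throughput from those wall-clock times (1,000,000 queries each; integer division, so the figures are approximate):

```shell
#!/bin/sh
# Queries per second given a wall time in seconds for 1,000,000 queries.
qps() { echo $((1000000 / $1)); }

qps 18   # slow query log on, long_query_time=0 -> 55555
qps 13   # slow query log off                   -> 76923
qps 8    # Drizzle baseline                     -> 125000
```

So on this degenerate workload, logging every query cost about 38% in wall time (18 s vs 13 s).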

by Stewart Smith at 20 September 2012 05:42

18 September 2012

Pia Waugh

Creating Open Government (for a Digital Society)

Recently I spoke on a panel at the NSW Information Commissioner’s “Creating Open Government” forum about my thoughts on blue sky ideas in this space. I decided to work on the assumption of the importance and need for creating open government for a digital society. In the 10 mins I had, I spoke on the pillars of public engagement, citizen centric services and open data, where we need to go in the open government movement, and a few other areas that I believe are vital in creating open government.

Below are some of the thoughts presented (in extended form), some cursory notes, and some promises to write more in the coming months :)

I should say up front that I am a person who believes government has an important role to play in society, even in a highly connected, digitally engaged and empowered society. Government, done right, gives us the capacity to support a reasonable quality of life across the entire society, reduce suffering and provide infrastructure and tools to all people so we can, dare I say it, live long and prosper. All people are not equal; there is a lot of diversity in the perspectives, skills, education, motivation and general capability throughout society. But all people deserve the opportunities in their life to pursue dignity, happiness and liberty. I believe government, done right, facilitates that.

In my mind government provides a way to scale infrastructure and services to support individuals to thrive, whatever the circumstances of their birth, and facilitate a reasonably egalitarian society – as much as can be realistically achieved anyway. I’m very glad to live in a country where we broadly accept the value of public infrastructure and services.

So below are some thoughts on next steps in creating open government, with additional references and reading available :)

1) Online Public Engagement

There is generally a lot of movement to engage online by the public sector across all spheres of government in Australia. However, this tends to be the domain of media and comms teams, which means the engagement is often more about messaging and trying to represent/push the official narrative. There are a lot of people working in this space who say they are not senior enough to have a public profile or to engage publicly without approval; and yet we have many people in government customer support roles who engage with the public every day as part of their job.

I contend that we need to start thinking about social media and “public engagement” also as a form of customer support, and not just media and communications. In this way, public servants can engage online within their professional capacities and not have to have every tweet or comment vetted, in the same way that every statement uttered by a customer service officer is not pre-approved. In this way interactions with citizens become of higher value to the citizen, and social media becomes another service delivery mechanism.

For example, consider how many ISPs are on social media, monitoring mentions of them, responding with actual customer support and service that often positively impacts that person’s experience (and by extension community perception) of the organisation. Government needs to be out there, where people are, engaged in the public narrative and responsive to the needs of our community. We need our finger on the pulse so we can better respond to new challenges and opportunities facing government and the broader community.

One of the main challenges we face is the perception from many people that there is little to be gained through public engagement. If a department or agency embarks upon a public consultation without genuinely being interested in the outcomes, this is blindingly obvious to participants, and is met with disdain. It is vital that government invest in online community development skills and empower individuals throughout the public service to engage online in the context of their professional roles.

This online engagement skillset can be deployed for specific consultations or initiatives, but it is also vital for maintaining, on an ongoing basis, a constructive narrative, tone and a community that contributes.

Further reading:

2) Citizen Centric Services

Citizens don’t care about the complexities of government, and yet we continue to do service delivery along departmental lines and spheres of government. The public service structure is continually changing to match the priorities of the government of the day, so not only is it confusing, but it is ever-changing, and we end up spending a lot of effort changing websites, stationery and frontline branding each portfolio shuffle. The service delivery itself (usually) continues seamlessly regardless of shifts in structure, but it is hard for citizens to keep up, and nor should they be expected to.

Citizen-centric services are about having a thematic and personalised approach to service and information delivery. Done well, this enables a large number of our population to self service, in the manner and at the time that is convenient to them. It is no small task to achieve as it requires a way to integrate (or perhaps sit in between) systems and data sources throughout all of government, but we have some established case studies in Australia that we can learn from. Rather than trying to get consistent systems across government – which leads to always being only as strong as your weakest link – it is feasible to have integration tools to “front end” government.

By enabling many citizens to effectively self service, this approach also frees up government resources for supporting our most vulnerable and complex cases.

It is worth also noting that a truly citizen-centric approach would be both cross-departmental *and* cross-jurisdictional. We need to start asking, and addressing, the challenge of how we can collaborate across the three spheres of government to give citizens a seamless experience.

A more eloquent description of this concept comes from my former boss, Minister Kate Lundy, in a speech entitled Citizen-centric services: A necessary principle for achieving genuine open government.

3) Proactive data disclosure – open data and APIs

The public service holds and creates a lot of data in the process of doing our job. By making data appropriately publicly available, there are better opportunities for public scrutiny and engagement in democracy and with government in a way that is focused on actual policy outcomes, rather than through the narrow aperture of politics or the media. This also builds trust, leads to a better informed public, and gives the public service an opportunity to leverage the skills, knowledge and efforts of the broader community like never before.

Whether it be a consultation on service planning or a GovHack, an open and contextualised approach to data and indeed the co-production of policy and planning ends up being a mechanism to achieve the most evidence-based, “peer reviewed” and consensus-driven outcomes for government and the community. It gets citizens directly engaged in actual policy and planning, and although the last word is always ultimately with the relevant Minister, it means that where political goals don’t align with the evidence-based policy recommendations, an important discussion can be had and questions asked by an engaged and informed public.

This, to me, is a real and practical form of democracy. I feel that party politics actually gets in the way to some degree, as it turns people off engaging with the most important institution in their lives. Like a high-stakes team sport, the players are focused on scoring goals against their opponents and forget about what is happening off the field.

As a person who is working in the public service, I truly believe that transparency is our best defence in fulfilling our duty to serve the public.

With major changes to legislation in recent years making FOI more seamless and accessible to citizens, departments are struggling to allocate necessary resources to comply in an extremely fiscally conservative environment.

In the meantime, although there is a general consensus on the value of opening up more public sector information (with admittedly sometimes quite different interpretations of that value), the fact is that it is largely seen as a “good” thing to do, a nice-to-have, and as such it has been challenging for departments to justify the not-insignificant resources required to make proactive data disclosure the status quo.

There is a decent argument to say that proactively publishing data (and indeed, reports) would help mitigate the rising costs of FOI as departments could point requests to where the information is already online. But realistically, unless the department had in place the systems to automate proactive publishing, then it will remain something done after the fact, not integrated into business as usual, ad hoc and an ongoing expense that is too easily dropped when the budget belt tightens.

I have people say to me all the time “just publish the data, it’s easy”. The funny thing is the vast majority of people have little to no experience actually doing open data in government. It is quite a new area, and though the expertise is growing, we are in the infancy stages in jurisdictions around the world. Even some jurisdictions with very large numbers of datasets are doing much of that work manually, the data quickly becomes out of date, and quite often the pressure to be seen to do open data overrides the quality and usefulness of the implementation, as we see datasets being broken into multiple uploads to meet quantitative KPIs.

The truth is, although putting up some datasets here or there is relatively easy – there is a lot of low hanging fruit – to move to a sustainable, effective, automated and systematic approach to open data is much harder, but is the necessary step if we are to see real value from open data, and if we are to see the goals of open data and mitigating FOI cost compliance merge.

Interestingly, another major benefit of proactively publishing government data is that the process of ensuring a dataset complies with privacy and other obligations is much the same whether you are making it public or merely sharing it across departments. By making more government data openly available, particularly when combined with some analysis and visualisation tools, we will be able to share data across departments in an appropriate way that helps us all have better information to inform policy and planning.

The good news is, in Australia we have the policies (OAIC, AG Principles of IP, Ahead of the Game, Gov 2.0 Taskforce Report, etc), the legislation (FOI changes) and the political cover (Declaration of Open Government, though more would be useful) to move on this.

I will be doing a follow up blog post about this topic specifically in the coming week after I attend a global open data conference where I intend on researching exactly how other jurisdictions are doing it, their processes, resourcing, automation and procurement requirements. I will also give some insights to what the dataACT team have learnt in implementing Australia’s first actual open data platform, which is an important next step for Australia building on the good work of AGIMO with the pilot.

Additional notes:

  • more effective and efficient government – shared across departments, capacity to have whole of gov business intelligence and strategic planning, capacity to identify trends, opportunities and challenges within public service
  • internal measuring, monitoring, reporting and analysis – government dashboards – both internal and public reporting on projects
  • innovation – public and private innovation through access to data, service APIs – gov can build on public innovation for better service delivery – eg GovHack
  • transparency – need to build trust, what is the value to gov? – eg of minister vs doctor example

4) Agile iterative policy

There is a whole discussion to be had about next-generation approaches to policy: approaches that are iterative and agile, that include actual governance to keep the policy live and responsive to changing circumstances, and that use live measuring, monitoring and analysis tools around projects and policies to make implementation more effective and to feed the lessons of implementation back into the policy.

The basic problem we have in achieving this approach is that, structurally, there is generally no one looking at policy from an end to end perspective. The policy makers are motivated to complete and hand over a policy. The policy implementers are motivated to do what they are handed. We need to bridge this gap between policy makers and doers in government to have a more holistic approach that can apply the lessons learnt from doers into strategic planning and development on an ongoing basis.

I’ll further address this in a followup blog post next month as I’m pulling together some schools of thought on this at the moment.

Check out the APS Policy Visualisation Network which is meeting for the first time next week if this space interests you. It will be fascinating to have people across the APS discussing new and interesting approaches to policy, and hopefully we will see the build up of new skills and approaches in this area.


  • iterative and adaptive policy – gone are the days of a static 10 year policy, we need to be feeding recommendations from testing, monitoring, measuring back into improving the policy on an ongoing basis.
  • datavis for policy “load testing”, gleaning new knowledge, better communication of ideas, visualisation networks for contextualisation, etc
  • co-production, co-design
  • evidence based, peer reviewed policy that draws on the diverse strengths throughout our community and public service

5) Supporting Digital industries

There are many reasons why, as a society, we need to have strong digital industries including IT, creative, cultural, games development, media, music, film and much more. Fundamentally these industries and skills underpin our success in all other industries to some extent; but also, we have seen many Australian digital companies have to go overseas to survive, and we need to look at the local market and environment and ask how we can support these companies to thrive in Australia.

I ran two major consultations about this over the past couple of years, and the outcomes and contributions are still very relevant:

  • The ICT and Creative Industries Public Sphere – included an excellent contribution paper from Silicon Beach, a group of Australian tech entrepreneurs who have exceptional insights into the sectors here and overseas.
  • The Digital Culture Public Sphere – included excellent contributions from the games development industry, digital arts, the digital culture (GLAM) sector and much more.


Open government can contribute to our digital sector through:

  • open data – esp. cultural content for which we are custodians, and esp. the large quantity of data which is out of copyright
  • being great users of and contributors to digital technologies and the Australian sector
  • focused industry development strategies and funding for digital sectors

6) Emerging Technologies

I finished my panel comments by reflecting on some emerging technologies that governments need to be aware of in our planning for the future.

These are just some new technologies that will present new opportunities for government and society:

  • 3D printing and nanotechnology – already we have seen the first 3D printed heart which was successfully transplanted.
  • Augmented Reality
  • Wearable computing and “body hacking”

On the topic of 3D printing, I would like to make a bold statement. You see, at the moment people are already trying to lobby against 3D printing on the basis that it would disrupt current business models. Many on the technology side of the argument try to soften the debate by saying it is early days, you don’t get perfect copies, and myriad other placations. So here it is.

3D printing will disrupt the copyright industry, but it will also disrupt poverty and hunger. As a society, we need to decide which we care about more.

There is no softly softly beating around the bush. There are some hard decisions and premises that need to be challenged, otherwise we will maintain the status quo without having even been aware of an alternative.

With advancements in nanotechnology also looming, we could see perfect copies of pretty much anything, constructed atom by atom out of waste for instance.

But there are also many existing technologies that can be better utilised:

  • games development – we have some of the most highly skilled games developers in the world and we can apply these skills to serious issues for highly citizen centric and engaging outcomes.
  • cloud – current buzzword – presents some good opportunities but also a jurisdictional nightmare, so tread very carefully. You need to assume anything in the “cloud” can disappear, or be read by anyone in the world
  • social media – see point 1

7) Final comment on government, power and society

Finally, just a couple of words about the most important element in creating open governments that can service the needs of an increasingly digital society.

We need to dramatically shift our thinking about technology and what it means to government. And no, I don’t mean just getting a social media strategy.

For anything we think, plan, strategise, hypothesise or talk about to become real, we inevitably use a number of technologies.

Most people treat technology like a magic wand that can materialise whatever we dream up, and the nicely workshopped visions of our grand leaders are generally just handballed to the bowels of the organisation, otherwise known as the IT department, to unquestioningly implement as best they can.

Technology, and technologists, are seen to be extremely important in the rhetoric, but treated like a cost centre in practice, with ever-increasing pressure to do more with less, “but could you just support my new iPad please?” IT Managers are forced to make technology procurement decisions based on which side of the ledger the organisation can support today, and the fiscal pressures translate to time pressures, which leaves no space for meaningful innovation or collaboration.

We need the leaders of government, especially throughout the public service to be comfortable with and indeed well informed about technology.

We need collaborative technologists in the strategic development process, as we are the people best positioned to identify new opportunities and to help shape a strategic vision that has a chance of seeing daylight.

We need to stop using the excuse that innovation or open government “isn’t about technology”, and recognise that we, as a government and as a society, need to engage a healthy balance of skills across our entire community to co-design the future of government together. And we need to recognise that if we don’t have technologists in that mix, then all our best intentions and visions will simply not translate into reality.

Monitor, measure, analyse, collaborate, co-design, and be transparent.

“The future is here, it simply isn’t widely distributed yet.” – William Gibson

Notes from after my speech from the event

The NSW AG’s speech was excellent – he spoke about how the primary difference between the NSW Government Information (Public Access) Act 2009 and the old approach is that the current Act pushes for proactive disclosure.

He mentioned three significant aspects of GIPA:

  • Accessibility
  • The manner in which it enables participation
  • Public right to know is paramount

There was an important comment from the day that we need to address sustainable power if we are to build a vision for the future.

There was comment on the public interest – public consultation, get the best inputs, peer review, choose the most evidence-based approach.

Questions about cloud:

  • Cloud attributes – jurisdiction, privacy, ownership, enforceability of contract, data transportability
  • Functional categorisation – private data? criticality of data/service delivery?
  • PATRIOT Act implications

AGIMO have some good policy advice in this area worth looking at on their Cloud computing page.

I wrote a hopefully useful post on this a while back called Cloud computing: finding the silver lining. I will be following that up a little later with some work I’m doing in gov atm around this topic, looking at the specific attributes of cloud services, how they map to the different things gov want to do, and the fact that government jurisdictions around the world are pretty much universally using what I call “jurisdictional cloud” services, meaning services hosted by gov, or by gov owned entities, within their legal jurisdiction. The broad calls for government to “just go cloud” suggest a binary approach of ‘to cloud or not to cloud’, which is simply not reality, and not a reasonable thing to expect when government has obligations around privacy, security, sovereignty, ensuring SLAs for service delivery to citizens, and much more.

I also did an interview with Vivek Kundra (prior CTO for US Federal Gov) a while back which will be useful to a lot of people.

I loved the five verbs of Open Gov by Allison Hornery: Start, Share, Solve, Sync, Shout. Her speech was great!

Also, Martin Stewart-Weeks talked about three principles of open government:

  1. partly in cathedrals and partly in bazaars,
  2. new relationships between institutions and communities, and
  3. knowledge has become the network.

Great presentation, and interesting how open source ideas are so prolific in this space.

by pipka at 18 September 2012 03:17

Silvia Pfeiffer

What is “interoperable TTML”?

I’ve just tried to come to terms with the latest state of TTML, the Timed Text Markup Language.

TTML has been specified by the W3C Timed Text Working Group and was released as a Recommendation (v1.0) in November 2010. Since then, several organisations have tried to adopt it as their caption file format. This includes SMPTE, the EBU (European Broadcasting Union), and Microsoft.

Both Microsoft and the EBU looked at TTML in detail and decided that, in order to make it usable for their use cases, a restriction of its functionality was needed.

EBU-TT
The EBU released EBU-TT, which restricts the set of valid attributes and features. “The EBU-TT format is intended to constrain the features provided by TTML, especially to make EBU-TT more suitable for the use with broadcast video and web video applications.” (see EBU-TT).

In addition, EBU-specific namespaces were introduced to extend TTML with EBU-specific data types, e.g. ebuttdt:frameRateMultiplierType or ebuttdt:smpteTimingType. Similarly, a bunch of metadata elements were introduced, e.g. ebuttm:documentMetadata, ebuttm:documentEbuttVersion, or ebuttm:documentIdentifier.

The use of namespaces as an extensibility mechanism ensures that EBU-TT files remain valid TTML files. However, a vanilla TTML parser will not know what to do with these custom extensions and will drop them on the floor.

Simple Delivery Profile

With the intention to make TTML ready for “internet delivery of Captions originated in the United States”, Microsoft proposed a “Simple Delivery Profile for Closed Captions (US)” (see Simple Profile). The Simple Profile is also a restriction of TTML.

Unfortunately, the Microsoft profile is not the same as the EBU-TT profile: for example, it contains the “set” element, which is not conformant in EBU-TT. Similarly, the supported style features are different, e.g. Simple Profile supports “display-region”, while EBU-TT does not. On the other hand, EBU-TT supports monospace, sans-serif and serif fonts, while the Simple profile does not.

Thus files created for the Simple Delivery Profile will not work on players that expect EBU-TT and the reverse.

Fortunately, the Simple Delivery Profile does not introduce any new namespaces or new features, so at least it is a strict subset of TTML, and not both a restriction and an extension like EBU-TT.

SMPTE-TT
SMPTE also created a version of the TTML standard called SMPTE-TT. SMPTE did not decide on a subset of TTML for their purposes – it was simply adopted as a complete set. “This Standard provides a framework for timed text to be supported for content delivered via broadband means,…” (see SMPTE-TT).

However, SMPTE extended TTML in SMPTE-TT with an ability to store a binary blob with captions in another format. This allows using SMPTE-TT as a transport format for any caption format and is deemed to help with “backwards compatibility”.

Now, instead of specifying a profile, SMPTE decided to define how to convert CEA-608 captions to SMPTE-TT. Even if it’s not called a “profile”, that’s actually what it is. It even has its own namespace: “m608:”.


With all these different versions of TTML, I ask myself what a video player that claims support for TTML will do to get something working. The only chance it has is to implement all the extensions defined in all the different profiles. I pity the player that has to deal with a SMPTE-TT file that has a binary blob in it and is expected to be able to decode this.

Now, what is a caption author supposed to do when creating TTML? They obviously cannot expect all players to be able to play back all TTML versions. Should they create different files depending on what platform they are targeting, i.e. an EBU-TT version, a SMPTE-TT version, a vanilla TTML version, and a Simple Delivery Profile version? Or should they throw all the features of all the versions into one TTML file and hope that players will pick out the bits they require and drop the rest on the floor?

Maybe the best way to progress would be to make a list of the “safe” features: those features that every TTML profile supports. That may be the best way to get an “interoperable TTML” file. Here’s me hoping that this minimal set of features doesn’t just end up being the usual (starttime, endtime, text) triple.
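For what it’s worth, a file restricted to that minimal triple might look like the following vanilla TTML sketch (the timings and text are, of course, purely illustrative):

```xml
<!-- Minimal "safe" TTML: only begin/end times and text -- no styling,
     no regions, and no profile-specific extensions or namespaces. -->
<tt xmlns="http://www.w3.org/ns/ttml" xml:lang="en">
  <body>
    <div>
      <p begin="00:00:01.000" end="00:00:03.500">Hello world.</p>
      <p begin="00:00:04.000" end="00:00:06.000">A second caption.</p>
    </div>
  </body>
</tt>
```

Anything beyond this – fonts, positioning, set animations – is where the profiles start to disagree.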


I just found out that UltraViolet have their own profile of SMPTE-TT called CFF-TT (see UltraViolet FAQ and spec). They are making some SMPTE-TT fields optional, but introduce a new @forcedDisplayMode attribute under their own namespace “cff:”.

by silvia at 18 September 2012 01:40

16 September 2012

Robert O'Callahan

Web Audio In Firefox

Let me clear up any confusion about what our plans are for audio APIs in Firefox.

Some MediaStream support has landed in Firefox 17. We have the ability to create MediaStreams containing the output of media elements and use them as a source for other media elements. I need to make some changes to those APIs based on feedback before we start evangelizing them for real. We can also create MediaStreams via getUserMedia (when that feature is preffed on in about:config).

The work on MediaStreams Processing that I did as an alternative to the Web Audio API is on the back-burner. Our priority is to implement Web Audio. Our plan is to implement AudioNodes using the same infrastructure as MediaStreams under the hood --- to reduce code duplication and to ensure that Web Audio/MediaStreams integration is perfect. Some core infrastructure for MediaStreams that are produced by processing inputs to outputs --- ProcessedMediaStreams --- already landed, to support the features above. Currently Ehsan is working on the IDL/DOM API side and I have some more work to do on the MediaStreams infrastructure side. We don't have a specific date set for Web Audio support, but it is a high priority.

At some point we will revisit MediaStreams Processing to get the features that Web Audio is missing, e.g., seamless stitching together of an audio and video playlist from a series of clips. That is lower priority.

by Robert at 16 September 2012 22:50

12 September 2012

Dave Hall

Switching Installation Profiles on Existing Drupal Sites

In my last blog post I outlined how to use per project installation profiles. If you read that post and want to use installation profiles to take advantage of site wide content changes and centralised dependency management, this post will show you how to do it quickly and easily.

The easiest way to switch installation profiles is using the command line with drush. The following command will do it for you:

$ drush vset --exact -y install_profile my_profile

An alternative way of doing this is by manipulating the database directly. Note that Drupal stores variables as PHP-serialized values, so you need the serialized form of the profile name, not the bare string. You can run the following SQL on your Drupal database to switch installation profiles:

UPDATE variable SET value = 's:10:"my_profile";' WHERE name = 'install_profile';
-- Then clear the cache (e.g. truncate the cache tables when DB caching is used).

Before you switch installation profiles, you should check that you have all the required modules enabled in your site. If you don't have all of the modules required by the new installation profile enabled, you are likely to have issues. The best way to ensure you have all the dependencies enabled is to run the following one liner:

drush en $(grep 'dependencies\[\]' /path/to/my-site/profiles/my_profile/ | sed -n 's/dependencies\[\] *= *\(.*\)/\1/p')
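As a quick sanity check of the sed pattern, outside of any Drupal site, here is a self-contained sketch; the .info content and the /tmp path are illustrative:

```shell
# Write an illustrative .info file, then extract its dependencies[] lines.
cat > /tmp/example_profile.info <<'EOF'
core = 7.x
dependencies[] = block
dependencies[] = dblog
EOF
sed -n 's/^dependencies\[\] *= *\(.*\)/\1/p' /tmp/example_profile.info
# prints:
# block
# dblog
```

Feeding that list to drush en enables all of the profile's dependencies in one go.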

Even though it is pretty easy to switch installation profiles I would recommend starting your project with a project specific installation profile.

by Dave at 12 September 2012 11:18

Scott James Remnant

Book Review: The Lies of Locke Lamora

The Lies of Locke Lamora

After someone’s done with that social network, if they could implement something which lets me backtrack from content to the link I originally clicked to get it, that’d be great. I have no idea how this book got into my queue; I have a feeling it may have even been one of those cards you pick up in Starbucks. Anyway, I digress.

The Lies of Locke Lamora is set in renaissance Venice and follows the story of the eponymous thief and confidence trickster as he attempts to con one of the city’s great noble families out of half of their fortune.

Ok, as befits the book’s hero, that was a slight lie.

The book isn’t set in renaissance Venice. It’s set in what renaissance Venice would have been, if it had been constructed on a planet with three moons, a thousand years before, by a long dead and departed alien race.

If the typical renaissance parts of the city were interspersed with giant structures of an alien material capable of holding and radiating light later in the day. If the citizens of renaissance Venice battled giant sharks for the entertainment of their peers.

Oh, and if there was magic.

So it’s like our world, but also very unlike our world. What we end up with is something akin to a Song of Ice and Fire, where the characters are very recognizable but the world perhaps isn’t.

And what wonderful characters they are! A failing of too many books is making the hero omnipotent; it’s one of the things I credit Harry Potter for: he actually needs his friends to win, and likewise here. Locke might be a great liar and conman, but he needs his fellow Gentlemen Bastards for the whole game; and all of them were trained together by the same priest, who had plans.

The book takes an interesting narrative approach, interspersing segments in the present with flashbacks to their training, or often to the recent past leading up to what just happened. It’s a technique that often allows the author to side-step you and let things play out in a different way than you perhaps first thought. While some might find it jarring, the narrative is always consistent and never irrelevant, so I found it a cute touch.

I found the characters and the story engaging and entertaining, and was frequently unable to put the book down; I spent an entire afternoon on the beach in San Diego engrossed in it, and more than one late night. In fact I enjoyed it so much I’m now reading the second in the series.


by scott at 12 September 2012 04:28

Book Review: Ready Player One

Ready Player One

You’d think that somebody would make a decent effort to make a social network around sharing recommendations of content like books and music, because some of the most interesting books that end up in my queue to read come from such recommendations via other means (IM mostly).

Ready Player One was such a book, a friend recommended it out of the blue, and the description looked interesting enough that I added it to my collection for later reading and completed it a couple of weeks ago.

The book has a charming idea: in the future the world is going to shit and everyone spends most of their time in a giant cross between Second Life and World of Warcraft. The creator of this virtual world dies and leaves behind a great treasure hunt involving classic 80s computer games and geek references, the reward being the keys to the system and his vast fortune. The story follows a single character as he attempts to solve the clues, and the friends he makes along the way.

In many ways it reminded me of a Neal Stephenson novel, especially Reamde; and I mean that in a complimentary way. It kept a reasonable pace throughout the narrative and sustained interest through all the different happenings. Though nothing truly surprising happens, it’s not about that, but about being along for the ride and chuckling at just how many references you can get.


by scott at 12 September 2012 04:09

11 September 2012

Giuseppe Maxia

My speaking engagements - Q4 2012

After a long pause in the speaking game, I am back.

I haven't been on stage since April, and it is now time to resume my public duties.

  • I will speak at MySQL Connect in San Francisco, just at the start of Oracle Open World, with a talk on MySQL High Availability: Power and Usability. It is about the cool technology that is keeping me busy here at Continuent, which can make life really easy for DBAs. This talk will be a demo fest. If you are attending MySQL Connect, you should see it!
  • A happy return for me. On October 27th I will talk about open source databases and the pleasures of command line operations at Linux Day in Cagliari, my hometown. Since I speak more in California than in my own backyard, I am happy that this year I managed to get a spot here.
  • The company will have a team meeting in November (Barcelona, here we come!) and from there I will fly to Bulgaria, where I am speaking at the Bulgarian Oracle User Group conference. Here I will have two talks: one about MySQL for business, and the other on "MySQL High Availability for the masses".
  • A few days later, again on the road, in London, for Percona Live, with a talk on MySQL High Availability: Power, Magic, and Usability. It is again about our core products, with some high technology fun involved. I will show how our tools can test the software, spot the mistakes, fix the cluster, and even build a step-by-step demo.
See you around. Look for me carefully, though: I may look different from how I have been depicted so far.

by Giuseppe Maxia at 11 September 2012 16:30

New strength for Continuent

It is public news now that Continuent has three new hires. I am particularly pleased with the news, as we are improving the team in three different directions:
  • Services and management, with Ronald Bradford, with whom we have crossed paths several times, first in the MySQL community activities, then as colleagues at MySQL AB, and again in community cyberspace.
  • Development, with Ludovic Launer, a senior developer with a long experience in development and software architecture. This is an excellent injection of new blood for our development team.
  • Sales, with Robert Noyes, who has worked in enterprise sales for 25 years and comes at the right moment to reinforce our business during the biggest period of growth I have seen since I joined the company.
Welcome to our new colleagues!

by Giuseppe Maxia at 11 September 2012 15:44

09 September 2012

Dave Hall

Managing per Project Installation Profiles

Unbeknown to many users, installation profiles are what is used to install a Drupal site. The two profiles that ship with core are standard and minimal. Standard gives new users a basic, functional Drupal site. Minimal provides a very minimal configuration so developers and site builders can start building a new site. A key piece of a Drupal distro is an installation profile.

I believe that developers and more experienced site builders should be using installation profiles as part of their client site builds. In Drupal 7 an installation profile is treated like a special module, so it can implement hooks - including hook_update_N(). This means that the installation profile is the best place for turning modules on and off, switching themes, or making any other site wide configuration change that can't be handled by Features or a module-specific update hook.

In an ideal world you could have one installation profile that is used for all of your projects, and you would just include it in your base build. Unfortunately installation profiles tend to become very project specific. At the same time you are likely to want a common starting point. I like to give my installation profiles unique names; rather than something generic like "my_profile", I prefer "[client_prefix]_profile". I'll cover project prefixes in another blog post.

After some trial and error, I've settled on a solution which I think works for having a common starting point for an installation profile that will diverge over time under a unique namespace. My solution relies on some basic templates and a bash script with a bit of sed. I could have written all of this in PHP and even made a drush plugin for it, but I prefer to do this kind of thing on the command line with bash. I'm happy to work with someone to port it to a drush plugin if you're interested.

Here is a simple example of the templates you could use for creating your installation profile. The version on github is closer to what I actually use for clients, along with the build script.

name = PROFILE_NAME
description = PROFILE_DESCRIPTION
core = 7.x
dependencies[] = block
dependencies[] = dblog


<?php

/**
 * @file
 * Install, update and uninstall functions for the PROFILE_NAME install profile.
 */

/**
 * Implements hook_install().
 *
 * Performs actions to set up the site for this profile.
 *
 * @see system_install()
 */
function PROFILE_NAMESPACE_install() {
  // Enable some standard blocks.
  $default_theme = variable_get('theme_default', 'bartik');
  $values = array(
    array(
      'module' => 'system',
      'delta' => 'main',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'content',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'user',
      'delta' => 'login',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'sidebar_first',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'system',
      'delta' => 'navigation',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'sidebar_first',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'system',
      'delta' => 'management',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 1,
      'region' => 'sidebar_first',
      'pages' => '',
      'cache' => -1,
    ),
    array(
      'module' => 'system',
      'delta' => 'help',
      'theme' => $default_theme,
      'status' => 1,
      'weight' => 0,
      'region' => 'help',
      'pages' => '',
      'cache' => -1,
    ),
  );
  $query = db_insert('block')->fields(array('module', 'delta', 'theme', 'status', 'weight', 'region', 'pages', 'cache'));
  foreach ($values as $record) {
    $query->values($record);
  }
  $query->execute();

  // Allow visitor account creation, but with administrative approval.
  variable_set('user_register', USER_REGISTER_VISITORS_ADMINISTRATIVE_APPROVAL);

  // Enable default permissions for system roles.
  user_role_grant_permissions(DRUPAL_ANONYMOUS_RID, array('access content'));
  user_role_grant_permissions(DRUPAL_AUTHENTICATED_RID, array('access content'));
}

// Add hook_update_N() implementations below here as needed.


<?php

/**
 * @file
 * Enables modules and site configuration for a PROFILE_NAME site installation.
 */

/**
 * Implements hook_form_FORM_ID_alter() for install_configure_form().
 *
 * Allows the profile to alter the site configuration form.
 */
function PROFILE_NAMESPACE_form_install_configure_form_alter(&$form, $form_state) {
  // Pre-populate the site name with the server name.
  $form['site_information']['site_name']['#default_value'] = $_SERVER['SERVER_NAME'];
}

Some developers might recognise the code above; it is from the minimal installation profile.

The installation profile builder script is a simple bash script that relies on sed.

#!/bin/bash
# Installation profile builder
# Created by Dave Hall

FILES="base.info base.install base.profile"
SCRIPT_NAME=$(basename $0)
OK_NS_CHARS='a-z0-9_'

description="My automatically generated installation profile."

usage() {
  echo "usage: $SCRIPT_NAME -t target_path -s profile_namespace [-d 'project_description'] [-n 'human_readable_profile_name']"
}

while getopts "d:n:s:t:h" arg; do
  case $arg in
    d) description="$OPTARG" ;;
    n) name="$OPTARG" ;;
    s) namespace="$OPTARG" ;;
    t) target="$OPTARG" ;;
    h) usage; exit 0 ;;
  esac
done

if [ -z "$target" ]; then
  echo ERROR: You must specify a target path. >&2
  exit 1
fi

if [ ! -d "$target" -o ! -w "$target" ]; then
  echo ERROR: The target path must be a writable directory that already exists. >&2
  exit 1
fi

if [ -z "$namespace" ]; then
  echo ERROR: You must specify a profile namespace. >&2
  exit 1
fi

ns_test=$(echo "$namespace" | tr -cd "$OK_NS_CHARS")
if [ "$ns_test" != "$namespace" ]; then
  echo "ERROR: The namespace can only contain lowercase alphanumeric characters and underscores ($OK_NS_CHARS)" >&2
  exit 1
fi

if [ -z "$name" ]; then
  name="$namespace"
fi

for file in $FILES; do
  echo Processing $file
  sed -e "s/PROFILE_NAMESPACE/$namespace/g" -e "s/PROFILE_NAME/$name/g" -e "s/PROFILE_DESCRIPTION/$description/g" $file > $target/$file
done

echo Completed generating files for $name installation profile in $target.

Place all of the above files into a directory. Before you can generate your first profile, make the build script executable by running "chmod +x" on it.

You need to create the output directory; for testing we will use ~/test-profile, so run "mkdir ~/test-profile" to create the path. To build your profile, run the build script with the arguments "-s test -t ~/test-profile". Once the script has run you should have a test installation profile in ~/test-profile.
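To see what the build script's sed invocation actually does, here is a self-contained sketch of just the templating step; the placeholder file, values, and /tmp path are all illustrative:

```shell
# Replace the PROFILE_* placeholders in a template with project-specific
# values, exactly as the build script does for each template file.
# PROFILE_NAMESPACE must be substituted before PROFILE_NAME, since the
# former contains the latter as a prefix.
namespace=test
name="Test Profile"
description="Demo profile."
printf 'name = PROFILE_NAME\nfunction PROFILE_NAMESPACE_install() {}\n' > /tmp/base.install
sed -e "s/PROFILE_NAMESPACE/$namespace/g" \
    -e "s/PROFILE_NAME/$name/g" \
    -e "s/PROFILE_DESCRIPTION/$description/g" /tmp/base.install
# prints:
# name = Test Profile
# function test_install() {}
```

The real script simply applies the same three substitutions to each template file and writes the results into the target directory.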

I will continue to maintain this as a project on github.

by Dave at 09 September 2012 05:29

Selena Deckelmann

Feminist reading: Creating a wiki page, reading

This will be a series of blog posts about the reading I’m doing about feminism.

Over the years, I’ve been given a list of books like the The Feminine Mystique, The Second Sex (Vintage), and most recently Fire with Fire: The New Female Power and How to Use It.

I’ve read parts or all of those, and many other books. But I am still sitting here with a profound sense of dislocation about feminism. I don’t have a list of feminist philosophers or writers that I strongly identify with. I find a lot of the writing either polemical or overly academic. I would like to find the books and articles that I can identify with, learn from and share.

My first action is to create a wiki page with links to books and articles that I’m finding in a number of syllabi for introductory women’s studies classes.

If you have a syllabus from a course you’ve taken that you can share with me, I’d love to see it.

The things I’ve read today include:

I am reflecting on all the readings, and if you’d like to join me in discussion, I’d love to have some discussion partners as I work through these texts.

by selena at 09 September 2012 01:41

07 September 2012

David Woodhouse

7 Sep 2012

Found myself shouting at the radio again today. These fucking retards with their petition to have opt-out filtering on Internet connections... obviously have no bloody clue what they're talking about. Everyone with a clue knows that the filtering doesn't work. You might as well legislate for the sun to shine in the middle of the night, or that π = 3.

Anyone with half a clue can always get around the filters; it only really prevents you from stumbling over such stuff by accident, which wasn't very likely in the first place. And to my knowledge there has never been a filtering system that hasn't suffered "feature creep" and been used to block access to things other than what it was originally purported to block. Like the one which blocked the whole of Wikipedia a year or two ago.

If you support this petition, that doesn't mean you're a bad person. Just stupidly naïve and clueless. It cannot work, and you make bad things happen by trying to persuade politicians to impose it. Please stop.

07 September 2012 19:41

06 September 2012

Selena Deckelmann

What features do developers get excited about in Postgres?

I’m here at DjangoCon in Washington, DC and thinking about what it is that developers are currently excited about in Postgres.

Postgres hackers are often very focused on solving our own problems, problems people bring up on our mailing lists and dealing with database scaling, replication and data management.

Developers using Postgres seem more interested in the features which make creating applications easier and removing complexity from architecture.

So, what are they interested in?

The features that I hear mentioned most often include:

(thanks to @ipmb for the list in a lightning talk today!)

What are the features you hear about from developers? Or if you’re a web developer, what are your favorite features in PostgreSQL?

by selena at 06 September 2012 15:14

Lev Lafayette

President's and Secretary's Combined Report to the Linux Users of Victoria, Inc, Annual General Meeting, 2012

To begin with, it is necessary to say that circumstances have meant that this will be a report combining the duties of President and Secretary, a matter that will be discussed further. Overall, however, looking back over the past year, it can be said that this has been another successful year for Linux Users of Victoria. Starting from September last year, we hosted a very successful Software Freedom Day, which was extremely well attended by relevant members of the community, providing an excellent opportunity for networking and planning.

read more

by lev_lafayette at 06 September 2012 07:17

Sridhar Dhanapalan

HTML5 support in Browse

One of the most exciting improvements in OLPC OS 12.1.0 is a revamped Browse activity:

Browse, Wikipedia and Help have been moved from Mozilla to WebKit internally, as the Mozilla engine can no longer be embedded into other applications (like Browse) and Mozilla has stated officially that it is unsupported. WebKit has proven to be a far superior alternative and this represents a valuable step forward for Sugar’s future. As a user, you will notice faster activity startup time and a smoother browsing experience. Also, form elements on webpages are now themed according to the system theme, so you’ll see Sugar’s UI design blending more into the web forms that you access.

In short, the Web will be a nicer place on XOs. These improvements (and more!) will be making their way onto One Education XOs (such as those in Australia) in 2013.

Here are the results from the HTML5 Test using Browse 140 on OLPC OS 12.1.0 on an XO-1.75. The final score (345 and 15 bonus points) compares favourably against other Web browsers. Firefox 14 running on my Fedora 17 desktop scores 345 and 9 bonus points.

Update: Rafael Ortiz writes, “For the record, previous non-WebKit versions of Browse only got 187 points on html5test; my beta Chrome has 400 points, so it’s a great advance!”

Screenshots: "The HTML5 test - How well does your browser support HTML5?" results (10 images).

by Sridhar Dhanapalan at 06 September 2012 03:10

05 September 2012

Pia Waugh

Internet, government and society: reframing #ozlog & #openinternet

Having followed the data logging issue peripherally, and the filtering issue quite closely for a number of years, I am seeing the same old tug of war between geeks and spooks, and am increasingly frustrated at how hard it is to make headway in these battles.

On one hand, the tech/geek community is the most motivated to fight these fights, because the issues are close to our hearts. We understand the tech and we can make strong technical arguments, but the moment we mention “data” or “http”, people tune out and it becomes a niche argument, easily sidelined. It is almost ironic that it is on these issues that the Federal government has been the most effective at (mainstream) messaging.

The fact is, these issues affect all Australians. When explained in non tech terms, I find all my non-geek friends get quite furious, and yet the debates simply haven’t made it into the mainstream, apart from a few glib catch phrases here or there which usually err on the side of “well if it helps keep children safe…”.

I think what is needed is a huge reframing of the issue. It isn’t just about the filter, or data logging, or any of the myriad technical policies and legislation proposals that are being fought out by the technical and security elite.

This is about the big picture. The role of the Internet in the lives of Australians, the role of government in a digital age, and what we – as people, as a society – want and what we will compromise on.

I would like to see this reframing through our media, our messaging, our advocacy, and our outreach to non-tech communities (ie – MOST of the community). I challenge you all to stop trying to tell your friends about “the perils of data logging on our freedoms”, and start engaging friends and colleagues on how they use the Internet, what they expect, whether they think privacy is important online in the same way as they expect privacy with their snail mail, and what they want to see in the Internet of the future.

I had a short chat to my flatmate about #ozlog, staying well away from the tech, and here is what she had to say:

What annoys me is how the powers that be are making decisions that can or will affect our lives considerably without any public consultation. The general public should be educated on the implications of these kinds of laws and have a say. To me, this is effectively tampering with the mail, which has all the same arguments. If we start just cutting corners to “catch the bad guys” then we start losing our rights and compromising without consideration, potentially to no effect on crime. It’s a slippery slope.

Pamela Martin – flatmate and non-geek, she still has a VCR

It’d be great to see a series on TV about the Internet and society, something that gets normal people talking about how they use the Internet, what they expect from the Internet and from government, and that works through some of the considerations and implications of tampering with how the Internet works. Some experts on security, networking, online behaviours and sociology would also be interesting. Let’s take this debate to the mainstream. The tech, security and political elite too often disregard the thought that “normal” people will get it or care, but this is in fact possibly the most important public debate we need to have right now.

I’ve written a little more on these ideas at:

Would love to hear your thoughts.

It is worth noting that during the big filter discussions in 2009/10 I was working for Senator Kate Lundy. Most of our correspondence up till that date was, to be frank, pro-filter letters arguing that people wanted less porn to protect the children. That is, the arguments were generally ideologically based and had little to do with the actual proposed policy, but were supporting letters just the same. The Senator blogged about her thoughts on the issue, which attracted (over a few posts) several thousand comments, largely considered and technical comments against the policy, which were really helpful both in building a case and in demonstrating that this is a contentious issue. I was and remain very proud to have worked for a politician with such integrity.

At the same time I saw a lot of people fighting against the filter using nastiness, personal attacks, conspiracy theories and threats. I would like to implore all those who want to fight the good fight: take a little time to consider what you do, the impact of your actions and words, and whether what you do actually contributes to the outcome you are seeking. It is too easy to say "well it's gonna happen anyway" and get all fatalistic, but I assure you, constructive, diligent and well-targeted advocacy and democratic engagement does win the day. At the end of the day, they work for us; we just sometimes need to remind them, and the broader "us", of that fact.

UPDATE: This post was initially inspired by a well-written SMH article which reported that the data logging issue had been deftly put back on the table (after being shelved for being too contentious) with questionable claims:

Her apparent change of mind may be a result of conversations with the Australian Federal Police, who have long pushed for mandatory online data retention. Neil Gaughan heads the AFP’s High Tech Crime Centre and is a vocal advocate for the policy.

“Without data retention laws I can guarantee you that the AFP won’t be able to investigate groups such as Anonymous over data breaches because we won’t be able to enforce the law,” he told a cyber security conference recently.

Now, I’m not involved in Anonymous, but I’m going to make an educated guess that it includes a reasonably high proportion of tech-literate people who understand and use encryption and other tools for privacy and anonymity. Data logging is ineffective against these, so the argument is misleading at best.

I was pleased and heartened to see the SMH article get a lot of attention and good comments.

This is only the beginning.

by pipka at 05 September 2012 21:12

Selena Deckelmann

While we’re here, let’s fix computer science education: DjangoCon keynote and resources

My keynote today is done, the resources list is here and the slides are below. I wrote slightly different text to address our experience here in the US, but a mostly-complete transcript of the talk is here.

A ton of people came up to me after the talk and we started talking about all the ways we might be able to solve problems. I created a mailing list for our first few discussions. If you are a person who doesn’t like Google Groups, contact me; I can of course set up something outside that infrastructure if enough people would prefer a different place to have this conversation.

We have a plan to contact teachers in our local communities and ask them what they need that we, as open source software developers, could help them with. We all agreed that we want to build things, but we’re pausing for a minute to ask the teachers around us what they need first.

For some background, the key bits of reading you should do to get up to speed are the following:

  • Stuck in the Shallow End, a book about the current state of computer science education through the lens of Los Angeles area public schools
  • And, finally, here’s the storify from the talk.


    by selena at 05 September 2012 19:41

    03 September 2012

    Tim Connors


    I'm too cheap to afford a power meter, but I finally got some useful power estimates by tackling some 15% segments on today's ride (the steeper the hill, the less air resistance and rolling friction matter; the extreme is a vertical slope, where the power output goes entirely into elevation gain (and rope friction)). If I am not mistaken, if my bike really is 8kg and I'm still 64kg like I was when I last measured myself 5 years ago, if I've got the mathematics right, and if I was a spherical, frictionless, air-resistance-less cow, then the formula to feed into /usr/bin/units is:

    You have: (elevation gain) metres * 9.8 metres/second**2 * (your weight + bike weight) kg / (climb minutes + seconds)
    You want: watts

    In the case of one 84m elevation gain segment I climbed today in 3 minutes 3 seconds according to Strava (don't know that I believe a 1-minute climb of 45 metres though - is the mapping correct?), I was putting out a minimum of 323 watts (plus the amount needed to overcome a reduced amount of friction).

    Yes, I know Strava gives you power estimates too, but they seem crap and distorted by periods of not-climbing (the 1:20 is hardly constant).
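The same estimate can be scripted. A minimal sketch for the segment above (64 kg rider + 8 kg bike, 84 m of gain, 3 minutes 3 seconds), using awk for the arithmetic:

```shell
# Minimum climbing power = mass * g * elevation gain / time,
# ignoring friction and air resistance, with g = 9.8 m/s^2.
mass_kg=72    # rider (64) + bike (8)
gain_m=84     # elevation gain of the segment
time_s=183    # 3 minutes 3 seconds
awk -v m="$mass_kg" -v h="$gain_m" -v t="$time_s" \
    'BEGIN { printf "%.1f watts\n", m * 9.8 * h / t }'   # prints 323.9 watts
```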

    Meanwhile, I'm amused by this climb I used to do a lot before I got a smartphone or GPS (unfortunately, I only remember ballpark times too - maybe 7 minutes). Kinda rare for roads to tell contours just where to go that the sun doesn't shine:

    03 September 2012 12:28

    Paul McKenney

    Thank you for another great Linux Plumbers Conference!

    Linux Plumbers Conference this past week in San Diego was a great event, and in my role as Program Committee Chair, I thank the many people who provided the excellent content that makes this event what it is. The program committee had to make some tough decisions between worthy submissions. The submitters, both successful and otherwise, provided an excellent set of abstracts and presentations. The Microconference leads put together excellent events, including first-ever Plumbers Constraint-Framework, LLVM, and Android microconferences. Last, but by no means least, there was wonderful audience participation, including a vibrant hallway track. (A number of the people I informally polled called out the hallway track as the most valuable part of the conference, which I believe to be a very good thing indeed!)

    I join all the attendees in thanking the Plumbers Organizing Committee and the Linux Foundation staff for providing the venue, scheduling, organization, food, and drink that powers these events.

    And I am already looking forward to next year's Plumbers! I hope to see you there!!!

    03 September 2012 00:55

    02 September 2012

    Robert O'Callahan

    Blast From The Past

    I haven't sung this song for years, but Gen, bless her, brought it back on Sunday. Stuart Townsend nails the important points in a moving and personal way.

    How deep the Father's love for us,
    How vast beyond all measure
    That He should give His only Son
    To make a wretch His treasure.

    How great the pain of searing loss,
    The Father turns His face away
    As wounds which mar the chosen One,
    Bring many sons to glory.

    Behold the Man upon a cross,
    My sin upon His shoulders;
    Ashamed I hear my mocking voice,
    Call out among the scoffers.

    It was my sin that left Him there
    Until it was accomplished;
    His dying breath has brought me life
    I know that it is finished.

    I will not boast in anything:
    No gifts, no power, no wisdom,
    But I will boast in Jesus Christ
    His death and resurrection.

    Why should I gain from His reward?
    I cannot give an answer.
    But this I know with all my heart
    His wounds have paid my ransom.

    by (Robert) at 02 September 2012 23:58

    31 August 2012

    Jonathan Lange

    Rigor mortis?

    I've been on a bit of a sanity bender recently: science, logic, evidence, experimentation, clarity and things like that. Here's a short list of some of the things I've been reading:

    If you're writing software then I recommend "Pretotype It" first, because it's likely to make you write less software, and that can only be a good thing. They are all great reads though.

    There are a few general themes: be explicit about your assumptions and try to verify or falsify them as soon as you can; do experiments; beware of certain mistakes and logical short-cuts; learn statistics; understand what you are saying. Wonderful notions all, but I'm not sure whether they are working out for me.

    Occasionally I'll get an email that has a couple of sentences but somehow manages to squeeze in all sorts of conflations, non sequiturs, and general fudging. It's hard to know where to begin. I can make a fair stab at analyzing the errors, but synthesizing a response that actually helps is very hard.

    Rigour also puts a restraint on rhetoric. It's hard to say something convincingly when you have a bunch of qualifiers dangling at the end. My writing (even now!) is slowed down as I notice the unfounded assertions and unstated assumptions that lie behind it.

    Also, much of this doesn't help you get from a vague, interesting intuition to a workable idea, from hunch to hypothesis, if you will. Sometimes a notion needs time to grow before it's rejected as irrational or incorrect. Something like Thinking Hats can help here.

    This is all peanuts though. Do more science. Really.

    by Jonathan Lange at 31 August 2012 14:35

    28 August 2012

    Lev Lafayette

    OpenMPI 1.6.1 with PGI compilers installation issue

    When installing the latest version of Open MPI (Version 1.6.1) with PGI compilers (specifically 12.5), the installation fails.

    The following configure options are set:

    CC=pgcc CXX=pgcpp F77=pgf77 FC=pgf90 ./configure --prefix=/usr/local/${BASE}-pgi --with-openib --with-tm=/usr/local/torque/latest --enable-static --enable-share

    For Torque, "latest" in this context is only 2.4.17; however, that shouldn't be the issue. The make fails as follows:

    make[6]: Leaving directory `/usr/local/src/OPENMPI/openmpi-1.6.1/ompi/contrib/vt/vt/tools/vtunify'

    read more

    by lev_lafayette at 28 August 2012 00:28

    27 August 2012

    Lev Lafayette

    Dunlop Armour Review - Big W Purchase

    Following attempted postings at:

    I ride approximately 20km per day, to and from work. About half this distance is hills and bends (following a river) and about half is inner urban and mostly flat.

    For the past six months I have struggled daily with this bike.

    To change gears without the derailleur slipping, the upper gears have to be turned in the opposite direction to the lower gears. This is not documented in the user and maintenance manual.

    read more

    by lev_lafayette at 27 August 2012 06:12

    Stewart Smith

    New Jenkins Bazaar plugin release

    I’ve just uploaded version 1.20 of the Bazaar plugin for Jenkins. This release is based on feedback from users and our experiences at Percona.

    • Do a lightweight checkout instead of a heavyweight checkout (if “Checkout” is enabled)
    • Fix bug: lightweight checkout “update” would always fail, as bzr update didn’t accept a repository argument. Switched to using bzr update followed by bzr switch. This should massively improve performance for those not doing a full branch.
    • Remove “Clean Branch” advanced option (replaced with “Clean Tree” option)
    • Add a “Clean Tree” advanced option. This will run “bzr clean-tree --quiet --ignored --unknown --detritus”, preserving the .bzr directory but doing the equivalent of wiping the workspace (starting with a fresh slate). This should massively improve performance for projects that do not have a clean build.
    • Clarify that Loggerhead is the repository browser used by Launchpad, and have a complete example of how to configure it.

    by Stewart Smith at 27 August 2012 05:10

    26 August 2012

    Selena Deckelmann

    FrOSCon: Mistakes were Made: Education Edition talk slides and notes

    I just finished giving my keynote at FrOSCon, and am pasting the notes I spoke from below. This was meant to be read aloud, of course. Where it says [slide] in the text is where the slides advance.

    Update: My slides are now available on the FrOSCon site.

    FrOSCon – Mistakes Were Made: Education Edition


    Thank you so much for inviting me here to FrOSCon. This is my first time visiting Bonn, and my first time enjoying Kölsch. I enjoyed quite a lot last night at the social event.

    Especially, I would like to thank Scotty and Holgar, who picked me up at the train station, and Inga, who talked with me at length on Thursday night, and all the volunteers, who have done a terrific job making this conference happen. Thank you all so much for a wonderful experience, and for cooking all the food last night!

    And I promised to show off the laser etching on my laptop I had done here by the local hackerspace. I come from the PostgreSQL community, so I got an elephant etched into the laptop. It only costs 10 euro and looks awesome.


    I’ve also made a page of resources for this talk. I’ll be quoting some facts and figures and this pirate pad has links to all the documents I quoted.

    For those of you from countries other than Ireland, Great Britain, the United States, Germany and Turkey – if you know where to get a copy of the computer science curriculum standards for your country, please add a link. Right at the top of this pirate pad is a link to another pirate pad where we’re collecting links to curriculum standards.


    And finally, this talk is really a speech, without a lot of bullet points. So, the slides will hopefully be helpful and interesting, but occasionally I will be showing nothing on a slide as I speak. This is a feature, not a bug.


    For the past few years, I’ve been giving talks about mistakes, starting with problems I had keeping chickens alive in my backyard. Here’s a map of my failures. Scotty is familiar with the video that is online that tells the whole story of how all these chickens died.

    Next, I talked about system administration failures – like what happens when a new sysadmin runs UNIX find commands to clean up, and deletes all the zero-length files, including devices, on a system. Or how to take down a data center with four network cables and spanning tree turned off. Here’s a tip: it really only takes the first cable.

    And most recently, I talked about hiring – how difficult it is to find the right people for tech industry jobs, how once you hire them, they might find another job way too quickly, and how the tech industry’s demand for skilled developers – and especially for developers with open source skills – is growing faster than we’re able to train people.

    Computer science enrollment at universities has decreased by about 3% since 2005 in the United States (from 11% of students down to 7% overall).


    At the same time the projected demand for CS and computer-related jobs will increase more than 50% by 2018, creating about 1.5 million new jobs in the US alone. Researchers say that even in places where enrollment in CS programs is up, companies report that they can’t trust that graduates have any of the fundamental skills that are necessary for new jobs.

    And these companies aren’t just in Silicon Valley – Oregon (where I’m from), the Netherlands (where I landed before I got to FrOSCon) and, from what I’ve heard these last few days, Germany are all experiencing shortages of skilled developers.

    But I’m not here to talk about those things either.

    Today, I’m going to share some observations about computer science education. I believe that our skill shortages start at the earliest stages in our schools, and if the system is left as it is, open source will suffer the most.


    In a survey of 2700 FOSS developers, 70% had at least a Bachelors degree, and most discovered FOSS sometime between the ages of 18-22. This age, and this time in college is the perfect time to connect with and attract people into the free software lifestyle. And think about this, how much easier would recruitment be if every student at university was already exposed to computer science ideas when they were in primary and secondary school?


    You may not know this, but my husband, Scott, is a high school teacher. That’s where I got my German last name. He specializes in global studies, journalism and psychology.

    Recently, he joined forces with a friend of mine named Michelle Rowley to help teach women how to program with Python. Naturally, I volunteered to mentor in the classes that were offered.


    This is a picture from one of the classes. Before these workshops, I had never tried to teach anyone how to program.

    For the workshops, I mentored groups of 6 or 8 women over two days. We walked around the tables, answering questions and just observing as some students learned about variables, conditionals and functions for the very first time. I enjoyed getting to know a group of women who were really excited and looking forward to applying the skills they were about to learn.

    Mentoring made me feel great, but it was also a little shocking.


    Our first lessons explained file system navigation, the command-line and how to set up a GUI text editor. Some people quickly became lost and confused. The connection between a graphical filesystem browser and the command-line was very difficult.

    Most students had never opened a terminal before, let alone typed a command into one. But that’s not all that surprising. What did surprise me was that some had never looked at files through the graphical file browser, instead using menus to find recently used files, or saving everything into just one folder, or just using a web-based file management tool like Google Docs. For those women, I found myself at a loss. I sat thinking during a break about how exactly I could explain a filesystem to someone who had never been exposed to the idea before. I thought hard about real-world examples that would help me explain.

    My hope is that you’re all thinking now about metaphors you’d use, pictures you’d draw and what you’d say to a person who didn’t understand filesystems. Or maybe, now that I’ve said that, you’re thinking about it now. Maybe you’re thinking about a person in your life who you might teach this exact lesson to. A parent, a brother or sister, a niece, your daughter or son.

    I hope you are thinking, because I want to ask each of you to do something after this talk is done. I want you to sit down with an important person in your life who doesn’t understand a computer science concept like filesystems and teach them. My guess is, with the right lesson, you can teach this to someone in an hour. And if we don’t have the right lesson now, if enough of us try this out, we’ll end up with the best lesson in the world for teaching a person what filesystems are, using real-world examples and the feedback from all our loved ones about what worked and what didn’t.
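If it helps to have something concrete to point at during that lesson, here is one possible hands-on sequence. The folder and file names are made up, and everything happens in a scratch directory so nothing real is touched:

```shell
# A sandbox for a filesystem lesson: build a tiny tree, then practise
# "where am I?", "what is here?" and "search" on it.
lesson=$(mktemp -d)
cd "$lesson"
mkdir -p photos/2012 letters
echo "hello" > letters/to-grandma.txt
pwd                    # "where am I?" - prints the scratch directory
ls                     # "what is here?" - letters and photos
find . -name '*.txt'   # the "search" - prints ./letters/to-grandma.txt
```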

    There’s an important reason why I want you to do this.


    I want us to demonstrate that sharing lessons works. UNESCO recently made the Paris Declaration, in which it encouraged the open licensing of educational materials produced with public funds. Recently, I contacted an organization to ask if I could transcribe a couple of lessons that they’d shared as PDFs into text form, to make them easier to use, and share them in a git repo. My idea was: share the lessons and let people submit changes and observations as diffs.

    The organization that published the lessons told me that they couldn’t allow me to use their lessons in this way, because the research was government funded.

    I believe that we can demonstrate to teachers and the organizations creating curriculum how useful it can be to share, so that no one gives me that excuse ever again.

    I want to show teachers how interesting and engaging it is to let people take a lesson, try it out and report back. These, after all, are the same skills we need to work on open source software. Except here we’ll apply them to teaching a lesson.

    So, get ready. I really am going to ask you all to do that.


    I started understanding what programming was my second year of college. I’d spent almost a year doing tech support at my university, getting the job after some friends taught me how to install linux from floppies and enough UNIX commands to be dangerous. One day, a friend sat me down and tried to teach me PASCAL from a book. The experience left me frustrated, and even angry. I remember thinking that very little of it made sense, and I felt very stupid. I decided at that moment that I never wanted to learn programming.

    Later, a different friend from college, Istvan Marko, sat me down in front of a command line prompt and showed me a shell script. He told me about his work automating configurations and showed me how to set up linux systems way more quickly than I could by entering commands one at a time. The automation blew my mind.

    What he modeled for me in shell scripting immediately made my work life better. The tools he showed me applied to what I already knew about computers and installing new linux systems, and I saw immediately how I could use it all.

    A whole world opened up as I thought through problem after problem, wrote little scripts to recompile kernels, and copied tricks from other friends like timing commands or redirecting output from STDERR to STDOUT. In the beginning I was just copying and studying because I was a little afraid of making mistakes — automation was so powerful! But soon I was remixing and writing my own stuff from scratch. I was hooked.

    The next year, I switched my degree program from Chemistry to Computer Science.

    So, I don’t think every person exposed to shell scripting will want to become a developer. But there were two things that happened for me in that lesson: what Istvan managed to get right was teaching me in my “zone of proximal development” or ZPD. It’s an education term that basically means — it was just challenging enough to be interesting, but not so hard that I got completely frustrated. This zone is where people learn things really well.


    The other important thing that happened was that the skill my friend taught me was something I could immediately apply elsewhere. But first, he worked with me, what we call guided practice, to rewrite a simple shell script with my username as a variable. Then I went off on my own, writing my own scripts to start and stop network interfaces and automatically connect to servers and run commands. This is what we call independent practice. And later, when I started writing Perl, I wrote my Perl exactly like I was writing bash scripts. I had just generalized my skills to another language! Maybe in the worst way possible!
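A first script of that guided-practice kind might look something like this. It is a sketch, not the original: the server name and the task are hypothetical, with the real connection step left commented out.

```shell
#!/bin/sh
# Guided practice: the username is a variable, so the same script
# works for anyone instead of having one name baked in.
user=${1:-student}            # take a name as an argument, or default
server=backup.example.com     # hypothetical server for the example
echo "Connecting as $user to $server"
# ssh "$user@$server" uptime  # the real step would go here
```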

    But what all those things were – the modeling, the guided practice, the independent practice and the generalization – was how I really learned a new skill. I learned how to think about tasks with automation in mind, with parameters and variables in mind. And I really, really learned it well because my friend took the time to make sure that I learned it.

    My experience of having a real-world application for a new skill matches up with research about keeping women and minorities, and many men, engaged in computer science. The process of customizing curriculum for the life experience of students is called contextualization. And of course, each person’s context is different. Part of the challenge for educators is designing courses that can be relevant to students from a variety of backgrounds, perhaps very different than the teacher. Like, teaching a bubble sort of student names in the physical world by having kids get up and move around, instead of teaching sorting only with numbers on a screen. Or using election data from local elections that affect students lives to teach about database schema and report design.
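The on-screen half of that bubble sort idea is small enough to sketch here (the names are invented): adjacent out-of-order names are swapped, pass after pass, just as the students would swap places in the room.

```shell
# Bubble sort of names: after each pass, the alphabetically last of
# the still-unsorted names has "bubbled" to the end of the list.
printf '%s\n' Mia Lena Omar Kai | awk '
  { names[NR] = $0 }
  END {
    n = NR
    for (i = 1; i < n; i++)
      for (j = 1; j <= n - i; j++)
        if (names[j] > names[j + 1]) {
          tmp = names[j]; names[j] = names[j + 1]; names[j + 1] = tmp
        }
    for (k = 1; k <= n; k++) print names[k]
  }'
# prints Kai, Lena, Mia, Omar - one per line, in alphabetical order
```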

    Or, when you’re thinking about this lesson you’re going to teach about filesystems, find a way to tie it to the life of the person you’re teaching. Have they ever “lost” a file that you later helped them find with the filesystem “search”? Have they ever lost a hard drive, part of a directory, or something “in the cloud”? Have they created files on their computer? Do they know where those files are, or what “where” even means on a computer? Could you maybe draw some kind of structure to help them think about how the files are organized? I’m sure you’ll come up with something great to fit your student’s experience.


    Some people believe that the reason why we don’t have enough people with the right kinds of developer skills is because university CS programs just aren’t teaching the right things. And, honestly, a lot of programmers never went to college for computer science!

    For all of us at FrOSCon, who are often trying to hire people with open source specific skills, it’s certainly true that very few universities are training students for that. But I think there’s a much bigger problem than the university programs out there.


    If you look at CS curriculum versus math, science, history or literature, you’ll find that there’s almost no computer science taught in primary and secondary schools. In the US, over the past 10 years we have lost 35% of the comp sci classes taught in high school, which is 9-12 grades. In addition, we have very few computer science teachers, and inconsistent standards for testing and qualifying CS teachers — leading to a teacher shortage in the places where CS is actually wanted by a school.


    I talked with Inga Herber, one of the core organizing volunteers here at FrOSCon, on Thursday night. She is preparing to teach secondary school computer science here in Germany. Her observation was that there’s a strong movement in the schools to get more computer science classes, yet there are still not many qualified teachers.

    But worse than the lack of classes and teachers: if you look at what is being taught in the few places where something like CS is available, we see classes like basic keyboarding (drills to help you type faster) given the “computer science” label. There are also classes on how to use Excel and Word, on searching the internet, and on programming in obscure or outdated languages, which for students often means just copying and pasting functions out of books. We’re actually teaching the “copy pasta” form of programming in our schools!

    The most promising classes in high school would seem to be those that teach students how to take apart and put back together computers. Knowing the parts of a computer is certainly useful. But learning computer science by taking apart and putting computers back together is like learning to read by tearing books apart and putting them back together. (thanks to Mike Lee for that analogy) In the same way that we don’t think of bookbinding as essential for literacy, taking apart and putting together computers, while fun and educational, will not teach computer science literacy.


    What we really need to teach students has nothing to do with keyboards, the office suite or motherboards. In the words of the “Exploring Computer Science” curriculum, we need to teach “computational thinking practices of algorithm development, problem solving and programming within the context of problems that are relevant to students’ lives.”

    This idea of “computational thinking” comes via Jeanette Wing, who wrote about this idea for the ACM in 2006. “Computational thinking is a fundamental skill for everyone, not just for computer scientists. To reading, writing, and arithmetic, we should add computational thinking to every child’s analytical ability. Just as the printing press facilitated the spread of the three Rs, what is appropriately incestuous about this vision is that computing and computers facilitate the spread of computational thinking.”


    And she provides a much longer definition later, that includes this, my favorite part:

    [it's] A way that humans, not computers, think. Computational thinking is a way humans solve problems; it is not trying to get humans to think like computers. Computers are dull and boring; humans are clever and imaginative. We humans make computers exciting. Equipped with computing devices, we use our cleverness to tackle problems we would not dare take on before the age of computing and build systems with functionality limited only by our imaginations.

    Jeanette Wing’s description makes me think about a world where computer science would be inspiring to everyone. And not just inspiring, but creative and fun.


    It makes me think of the great Ada Lovelace comics I’ve seen like this one by Sydney Padua, where Charles Babbage and Ada Lovelace, creators of the first computing machine, are crimefighters. The heroes are quirky, smart and solving devilishly tricky problems.

    I also love the new Sherlock, a BBC TV show, for how wonderfully geeky he is in his problem solving, and how he often uses silly pranks with technology to show off. The first episode has him sending group texts as a sarcastic counterpoint to a police chief’s press conference.

    In the same way that Einstein and Feynman are crucial parts of the storytelling around physics, we need to talk more about the heroes of computer science, about what made them human, and interesting and not like computers at all.

    And armed with these fascinating stories, we can share them as part of our teaching. Because this is all so fun — this conference, is full of people with great stories, working on an event that spans seven years. There have been great times, and near disasters, and triumphs. Those can be our examples and starting points for explaining the computer science that we want our friends and family to understand.


    As I’ve done my research, it’s become painfully clear how separated open source developers are from teachers. There are a lot of reasons why this might be. I married a teacher, but I don’t think advocating for marriage between teachers and open source people is a scalable solution.

    So, other than marriage, how can we invite more teachers into open source?

    One barrier to communicating with teachers is being able to speak the language of education. This is not just the terms teachers use for their work. It’s also having the experience of and relating to teaching.

    Teaching is incredibly difficult. It’s both mentally and physically challenging. When I finished mentoring students for one day and teaching a single hour-long lesson, I was ready for a beer and sleep. I can’t imagine doing that every day.


    But teachers do this for 8 hours a day, every day. A valuable experience for every developer is to teach something new, even just for a few minutes, in person and without a computer. I don’t think you need to get in front of a classroom to experience this.

    What you can do is schedule an hour with a friend, a colleague or a family member and try to teach. See if you can get them to really understand, and then demonstrate the new skill back to you. Like with the filesystems – after you explain, see if they can do something specific — like find a special file (easter egg planting!), or explain back to you what it is that you taught them, or even better: watch as they try to explain filesystems to someone else.

    Once you’ve had the experience of helping someone master a brand new skill, you’ve started down the path that teachers walk every day. This is a shared experience, a point of empathy you can draw on if you ever have the chance to talk directly to a teacher.


    For too long, free software advocates have focused on getting open source software into classrooms without understanding exactly what that means to teachers. When something goes wrong with my servers or my laptop, it’s my job to figure out what is wrong and to fix it. I have time in my day for mistakes, and for bugs.

    Teachers, on the other hand, have a certain number of hours in a year with students. They count them! That time is carefully scripted, because teaching is actually very difficult. Teachers can’t improvise excellent teaching when the computers they are using crash, or the software doesn’t work the way they expected, or the user interface changes suddenly after an upgrade. All the things that I think of as features are, for teachers, just more things that take away time they would otherwise spend creating lessons and teaching students. This is why I think free software is not more widely used in schools.


    I do not mean to diminish the efforts of the many awesome projects like Skolelinux, a school-specific Linux distribution based on Debian. But if we look at the software that runs grading and attendance, the software that kids use to play games, and the operating systems on teacher computers — this software is largely still proprietary.

    I hope that I can plant a seed of empathy in you all for what teachers are up against. Think about how much time you spend considering the filesystem lesson you’re going to teach, for example. My husband was given one hour per day to plan for 7 hours of teaching. I spent nearly 100 hours preparing for this keynote. The ratio of preparation time to instruction time for professional teachers is terrifyingly small.


    If open source contributors all experienced what in-person teaching is like for the non-technical people in our lives, learning to use modeling, guided practice, independent practice and generalization in our own lessons about open source technology, we would develop a common vocabulary for talking with teachers, in the same way that in free software we share a vocabulary that starts with freedom, source code and sharing.

    And once we can talk with teachers, and we do so on a regular basis, we can ask them what it is that they really need, and how we as open source experts can help them make schools and teaching even better. Because, really, teachers and the free software movement are natural allies in our efforts to share information.


    We have a tremendous problem ahead of us. There aren’t enough people who understand the fundamentals of computer science. And a lot is at stake.

    We’re in an era where privacy, financial security and our elections are managed by software. If we all get this right, then software we create will also be used to fight corruption, solve important problems and make us all more free.

    Before I leave, I want to share a story from 2009. This isn’t a free software story, not yet, but it’s about the power of computational thinking when applied to the democratic process.


    So in 2009, I was invited to come teach a class about PostgreSQL. I travelled to Ondo State, Nigeria, specifically to Akure.


    Here’s a picture of my students. They had degrees in computer science or had taken programming classes, and several were professional developers.


    It was from them that I learned how the Governor of Ondo State, Olusegun Mimiko, won his election. He was running against former Governor Agagu, the candidate of the People’s Democratic Party, which is also the majority party across Nigeria.


    You may not have heard about this, but back in 2007 when the elections were held, there was country-wide unrest. United Nations observers reported violence, and accusations of voter fraud were raised.


    So, once the ballots were counted, Mimiko had lost.


    But, his campaign had been so sure they were going to win because of poll results.


    So, they filed a lawsuit and got ahold of the ballot boxes for a recount. And it was at this point where they did something different.


    The way that you vote in Nigeria is with a thumb-print next to the candidate you select on a paper ballot. So, if there was fraud, the Mimiko team reasoned, you would have lots of ballots with the same thumb print. A local group of techies put together a plan. They would electronically scan in all the ballots and then have someone validate fingerprints and find duplicates.


    They searched the world for a fingerprint expert, and found Adrian Forty in Great Britain. Adrian Forty and his team analyzed all the ballots, and they found a few duplicates.


    In fact, they found 84,814 duplicate fingerprints. In one case a single fingerprint was used 300 times.

    After a two year court battle, finally, they won. :) But the work was just beginning.

    One of the places my colleagues took me was Idanre Hill, which is on the tentative world heritage site list. This is a picture of a handrail that was cut by the outgoing government. My colleagues described it with a Yoruba phrase that means “left like thieves.” They won the election, but got no help from the outgoing government in the transition to power.


    Of course, the method for detecting voter fraud went viral. The expertise in counting fingerprints has been shared with neighboring states, and similar fraud was uncovered and stopped in Osun State as well.


    The new government in Ondo State has been very focused on IT initiatives, and in particular focused on what using cell phones to connect citizens with their government can do. One initiative gave all new mothers cell phones to stay in touch with their doctors. The cell phone program resulted in reducing the number of mother and child deaths to just 1 last year, a 35% drop in mother and infant mortality. Their goal is a 75% reduction in infant mortality by 2015.


    This last picture was taken as two friends and I hiked up Idanre Hill.

    Which brings me to what I want you all to do.

    We need to teach people how to ask the right questions, to be suspicious or satisfied by the answers they get to their questions. We need to teach people how to break apart problems into understandable chunks instead of assuming that they will never understand a complicated process.

    And we need to teach them the value of sharing source code. What it means to have software freedom, and how much it matters to us that everyone has the opportunity to learn from and build upon the work of others.

    I believe that we can demonstrate again, to the world, how useful it can be to share, how interesting and engaging it is to let people take a lesson, try it out and report back.

    Think about filesystems. Think about your friends and family. Who could you spend an hour with, teaching them an important skill that will help them understand our world of computers?

    Thank you very much for your time today.

    To encourage you all to do this, I created a little website where you can publicly say that you’re going to try to teach a lesson to someone. The authentication system only supports twitter right now – very sorry. But I have some code and was planning on hacking in email login this afternoon. I also have published the code on Github and linked to it from the site. I hope that you’ll have a look, and certainly if you find bugs, let me know.

    by selena at 26 August 2012 22:01

    Sage Weil

    v0.51 released

    The latest development release v0.51 is ready.  Notable changes include:

    • crush: tunables documented; feature bit now present and enforced
    • osd: various fixes for out-of-order op replies
    • osd: several rare peering cases fixed
    • osd: fixed detection of EIO errors from fs on read
    • osd: new ‘lock’ rados class for generic object locking
    • librbd: fixed memory leak on discard
    • librbd: image layering/cloning
    • radosgw: fix range header for large objects, ETag quoting, GMT dates, other compatibility fixes
    • mkcephfs: fix for default keyring, osd data/journal locations
    • wireshark: ceph protocol dissector patch updated
    • ceph.spec: fixed packaging problem with crush headers

    Full RBD cloning support will be in place in v0.52, as will a refactor of the messenger code with many bug fixes in the socket failure handling.  This is available for testing now in ‘next’ for the adventurous.  Improved OSD scrubbing is also coming soon.  We should (finally) be building some release RPMs for v0.52 as well.

    You can get v0.51 from the usual locations:

    by sage at 26 August 2012 15:57

    25 August 2012

    Selena Deckelmann

    Europe’s open source advantage

    I had this phrase “europe’s open source advantage” rolling around in my head Friday as I helped pack 1500 conference swag bags. We had a team of at least twelve people standing and seated in an assembly line for two hours to complete the task.

    And this is what always happens at the volunteer-run free and open source conferences. I was told that somewhere around 70 volunteers would help out today, and it’s felt like easily twice that many people have been wandering around and pitching in today.

    After we were done, the woman pictured above brought conference-themed cookies that she bakes every year for the organizing team.

    Attendance at FrOSCon is estimated at 1500. FOSDEM is estimated at about 5000. Chaos Communication Congress had an attendance of 4230 in 2008. All three are volunteer organized, focused on free software, and software freedom (although CCC is also about hacking, security and politics, many people I know go to 2 or more of these events).

    FrOSCon has been around for seven years, inspired into creation by the organizer’s trip to FOSDEM, another terrific free and open source conference in Brussels, Belgium. What struck me at FOSDEM is the same feeling I’m having here in Köln/Bonn.

    It’s a privilege to be here. Organizers are excited and smiling and relaxed. Speakers feel an obligation to take controversial positions — like I’ve heard more than once in the last 24 hours that “if you value freedom, you won’t buy Apple products.” Also: “What do I care about patents? I live in Europe.” And as I look around, I’m one of maybe 5% of people with a Mac laptop. (Far more people have iPhones.)

    I think about our conferences in the USA, and we could learn some things, both in terms of attendance and in terms of our vision. As for where exactly we lost track of the activist spirit clearly on display here… maybe it has to do with our proximity to Silicon Valley, where I was recently told that “charitable giving here is often in [the] form of angel investing.”

    We don’t seem to feel an obligation to volunteer and create these large, general, self-sustaining conferences. We certainly have large commercial conferences, and smaller generalist conferences. SCALE, I think, is one example of a community that’s created something sustainable. And I’ve heard SE-LinuxFest is growing very quickly. So maybe we’re at a turning point?

    I’m giving a keynote tomorrow about computer science education. What I’m really going to talk about is computational thinking. It’s a relentless decomposition of problems, algorithms for problem solving and the practical application of those ideas – in code or not.

    That’s the kind of thinking I believe leads some of us from “free as in freedom” for software to value judgements about individual hardware purchases. Or, sometimes it leads us to find space in our communities for people who exist somewhere along the freedom spectrum. :)

    I’ve had a chance to catch up with old friends, and make more than a few new ones. Mostly I’m looking forward to tonight’s BBQ, even if it rains. Henrik tells me that it’s what sets the whole tone for FrOSCon. People coming together to eat and drink and get to know one another over a shared feeling of belonging, out from behind their screens. And also to be openly critical of the ideas, organizations and products that threaten the foundations of free software.

    by selena at 25 August 2012 11:29

    23 August 2012

    Travis Reitter

    Inbox Zero and GTD: a personal success story

    After years of attempting to apply the philosophies of Inbox Zero and Getting Things Done (including, appropriately, failing to finish the GTD book twice), my personal and work inboxes are empty and I've never felt so on top of my tasks. In no particular order, here are some changes I've made that have helped me out:

    On top of Slogen
    Forget what you're standing on and focus.

    I read The 4-Hour Work Week, which touches on Inbox Zero and adds some interesting productivity-increasing ideas, including giving up on boring articles and the controversial "don't waste time reading news". I haven't stopped reading news, but I feel like I've gotten a little more efficient at it. I mostly ignore mailing lists (except those for my immediate projects). The only reason I'm subscribed to some is so that I can reply to them on the occasion I'm CC'd or someone brings up a specific discussion to me.

    Along the line of distractions, I've been training myself to catch them before they happen. Because there's increasing evidence that multi-tasking reduces productivity (and it seems obvious to me anyhow), I push myself to not switch tasks unless I absolutely have to. And I actively ignore anything that I know will needlessly steal my focus. This includes only checking up on IRC periodically (and mostly just to check for messages that have been highlighted for me). If it's urgent, people will send me a direct message (and Gnome Shell will give me a nice notification).

    I believe the author of 4HWW intentionally mentions keeping only calendar events, not tasks. I've always liked the idea that you will remember anything important and anything minor will come up again (particularly because I've struggled with far too many tasks in my GTD system). But that seems too extreme/unworkable for me, so the key has just been being much more aggressive in deleting tasks that sit too long (knowing I'll never get to them) and doing periodic sweeps through each list. If you have trouble with that, you might want to create a "some day" list to move neglected tasks to. Then, just make a point to ignore that list. I think I'm nearly ready to delete mine.

    Another part of minimizing mental burden has been closing application tabs and windows and conversation notifications as soon as I can. When I finish a task, I close everything related to it (even if I think I'll use some of them later). The stress saved by this reduction in visual noise is much greater than occasionally having to re-open something sooner than expected (which is incredibly rare, as you might imagine).

    Your screen should not look like this.

    And the latest change I've made, just before reaching inbox zero/GTD zen, has been to batch process my inboxes. I had a terrible habit of glancing at mail, flagging it important and unread, then moving on. It turns out that I subconsciously skip over bold, red lines of text. This was a horrible priority inversion that led me to sporadically clearing out the simple mail while ignoring the high-priority, difficult tasks.

    Now, when I do my few daily mail passes, I do two quick passes (always in oldest to newest order, never skipping any):
    1. Add tasks, file mail
      1. parse out new tasks to my GTD system (setting due date and priority appropriately)
      2. if it needs a trivial or no response, give it immediately, then file
      3. otherwise, move on to the next letter
    2. Give non-trivial responses
      1. Read the letter in detail
      2. Respond to each point necessary

    Now, with an empty inbox, I can get work done by just drilling through items in my work or personal task list in order from highest to lowest priority. It's a lot harder to skip around, ignoring the important tasks, when they're sitting in front of you, in the exact order you've chosen yourself.

    I still have things to improve (I really wish I had one place to manage all my tasks, not several), but I finally feel in control of my day-to-day goals!

    23 August 2012 20:38

    22 August 2012

    Giuseppe Maxia

    MySQL 5.6 replication gotchas (and bugs)

    There has been a lot of talk about MySQL 5.6 replication improvements. With few exceptions, what I have seen has been either marketing messages or hearsay. This means that few people have really tried out the new features to see whether they meet users' needs.

    As usual, I did try the new version in my environment. I like to form my own opinion based on experiments, and so I have been trying out these features since they first appeared in early milestones.

    What follows is a list of (potentially) surprising results that you may get when using MySQL 5.6.
    All the examples are made using MySQL 5.6.6.

    Gotcha #1 : too much noise

    I have already mentioned that MySQL 5.6 is too verbose when creating the data directory. This also means that your error log may have way more information than you'd like to get. You should check the contents of the error log when you start, and either clean it up before using it on a regular basis or take note of what's there after a successful installation, so you won't be surprised when something goes wrong.

    Gotcha #2 : Innodb tables where you don't expect them

    Until version 5.5, after you installed MySQL, you could safely drop the ib* files, change the configuration file, and restart MySQL with optimized parameters. Not anymore.

    When you run mysqld with the --bootstrap option (which is what mysql_install_db does), the server creates 5 innodb tables:

    select table_schema, table_name
    from information_schema.tables
    where engine='innodb';
    +--------------+----------------------+
    | table_schema | table_name           |
    +--------------+----------------------+
    | mysql        | innodb_index_stats   |
    | mysql        | innodb_table_stats   |
    | mysql        | slave_master_info    |
    | mysql        | slave_relay_log_info |
    | mysql        | slave_worker_info    |
    +--------------+----------------------+

    The slave_* tables are needed for the crash-safe slave feature, which we'll cover later. The innodb_*_stats tables are documented at Innodb persistent stats, and they seem to contain almost the same info as the tables with the same name that you find in Percona Server's INFORMATION_SCHEMA. I can only speculate why these tables are in mysql rather than in performance_schema.

    Another side effect of this issue is that any setting you want to apply to innodb (size of the data files, file-per-table, default file format, and so on) must already be in place when you run mysqld --bootstrap.

    Gotcha #3 : Global transaction IDs and security

    The information about Global transaction ID is not easy to locate. But eventually, searching the manual, you will get it. The important information that you take from this page is that this feature only works if you enable all these options in all the servers used for replication:


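    Spelled out as a my.cnf fragment, reconstructed from the descriptions below (option names as of the 5.6.6 milestone; the last one in particular changed names in later releases, so check the manual for your version):

```ini
[mysqld]
log-bin                         # binary logging: needed for any replication master
server-id=101                   # a unique server ID: needed for any replication
log-slave-updates               # the puzzling one: lets any server be promoted or demoted
gtid-mode=ON                    # the main switch for global transaction IDs
disable-gtid-unsafe-statements  # forbids non-transactional and other unsafe statements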
    The first two options are not a surprise. You need them for replication anyway. Check.

    The third one is puzzling. Why would you want this option in a master? But then you realize that this will allow any server to be promoted or demoted at will. Check.

    gtid-mode is the main option that needs to be enabled for global transaction IDs. Check

    The last option forces the server to be safe, by using only transactional tables, and by forbidding things like temporary tables inside transactions and create table ... select. Which means that if you try to update a MyISAM table in the master, the statement will fail. You won't be allowed to do it. Check?

    The trouble is, if you enable gtid-mode=ON (with its mandatory ancillary options), you can't run mysql_secure_installation, because that utility needs to delete anonymous users and clean the 'db' table for anonymous usage of the 'test' database.

    The workaround is to enable GTID after you secure the installation, which means one more server restart.

    Gotcha #4 (bug): multi-thread slave won't work without crash-safe slave tables

    To enable parallel replication, you need to change the value of 'slave_parallel_workers' to a value between 1 and 1024.

    show variables like '%worker%';
    +------------------------+-------+
    | Variable_name          | Value |
    +------------------------+-------+
    | slave_parallel_workers | 0     |
    +------------------------+-------+
    1 row in set (0.00 sec)

    slave1 [localhost] {msandbox} ((none)) > stop slave;
    Query OK, 0 rows affected (0.06 sec)

    slave1 [localhost] {msandbox} ((none)) > set global slave_parallel_workers=5;
    Query OK, 0 rows affected (0.00 sec)

    slave1 [localhost] {msandbox} (mysql) > start slave;
    Query OK, 0 rows affected, 1 warning (0.05 sec)

    slave1 [localhost] {msandbox} ((none)) > select * from mysql.slave_worker_info\G
    Empty set (0.00 sec)

    What the hell? The workers table is empty.

    I know the cause: the slave_worker_info table is not activated unless you also set relay_log_info_repository='table'. What I don't understand is WHY it is like that. If this is documented, I could not find where.

    Anyway, once you are in this bizarre condition, you can't activate relay_log_info_repository='table', because of the following

    Gotcha #5 (bug) : master and relay_log repositories must be set together or they will fail

    After activating parallel threads without enabling table repositories, you can't easily get back to a clean replication environment:

    set global relay_log_info_repository='table';
    start slave;
    ERROR 1201 (HY000): Could not initialize master info structure; more error messages can be found in the MySQL error log
    And the error log says:
    120822 14:15:08 [ERROR] Error creating relay log info: Error transfering information.

    What you need to do is

    • stop the slave
    • enable both master_info_repository and relay_log_info_repository as 'table'
    • set the number of parallel threads
    • restart the slave

    slave1 [localhost] {msandbox} (mysql) > stop slave;
    Query OK, 0 rows affected (0.02 sec)

    slave1 [localhost] {msandbox} (mysql) > set global master_info_repository='table';
    Query OK, 0 rows affected (0.00 sec)

    slave1 [localhost] {msandbox} (mysql) > set global relay_log_info_repository='table';
    Query OK, 0 rows affected (0.00 sec)

    slave1 [localhost] {msandbox} (mysql) > set global slave_parallel_workers=5;
    Query OK, 0 rows affected (0.00 sec)

    slave1 [localhost] {msandbox} (mysql) > start slave;
    Query OK, 0 rows affected, 1 warning (0.01 sec)

    slave1 [localhost] {msandbox} (mysql) > select count(*) from slave_worker_info;
    +----------+
    | count(*) |
    +----------+
    |        5 |
    +----------+
    1 row in set (0.00 sec)

    This sequence of commands will start parallel replication, although MySQL crashes when restarting the slave.

    Gotcha #6 : Global transaction IDs not used in parallel threads

    Global transaction IDs (GTIDs) are very useful when you need to switch roles from master to slave, and especially when you deal with unplanned failovers. They are also a great simplification in many cases where you need to identify a transaction without getting lost in the details of binary log file and position.

    However, in one of the cases where GTIDs would have been most useful, they are not used. The table mysql.slave_worker_info still identifies transactions by binary log and position. Similarly, CHANGE MASTER TO does not use GTIDs, other than allowing the automatic alignment (MASTER_AUTO_POSITION=1). If you need to perform any fine-tuning operations, you need to revert to the old binary log + position.

    by Giuseppe Maxia ( at 22 August 2012 15:31

    Simon Horman

    Chiz. Horman Textile

    The first of my wife's textiles are available and our online shop is now open.

    22 August 2012 07:12

    21 August 2012

    Sage Weil

    Summer Adventures with Ceph: Building a B-tree

    Greetings! I am a summer intern at Inktank. My summer project is to create a distributed, B-tree-like key-value store, with support for multiple writers, using librados and the Ceph object store. In my last blog post, I wrote about the single client implementation I created to start out with. Over the last several weeks, I’ve had great fun and have learned a lot working on my project. I designed and implemented an algorithm for making my program work for an arbitrary number of clients. I still have more to do – in particular, I’ve been changing the algorithm significantly as I encounter bottlenecks during Teuthology testing – but the core of my project is complete.

    I was faced with the problem of how to allow multiple concurrent operations without causing interference that could leave the system in an inconsistent state. The Ceph object store provides atomic operations on a single object, but I sometimes need to atomically change multiple objects. When splitting or merging a leaf, I have to change the leaf object and the index object without making it possible for other clients to see an in-between state.

    All of the papers I read that dealt with this issue assumed either a single computer with shared memory and locking, or a locking service managed by a dedicated server or a Paxos cluster. In a locking system, the process (or client) modifying an object ensures exclusive access to the object for the entirety of the modification, and all other threads (or clients) that attempt to access that object block until the lock is released. Ceph does not have a lock service, as such setups are expensive. A locking system is pessimistic – that is, it assumes contention is likely. This is efficient on a single machine with shared memory, but replicating that functionality with a lock manager on a distributed system creates a bottleneck. In the probable use cases for my key value store, there are likely to be very large numbers of keys and very little contention. I needed to design an optimistic algorithm that took advantage of this low probability of contention. In an optimistic system that does not use a lock manager, the basic mechanism of ensuring that clients do not interfere with each other is to have them fail (or roll back changes and retry) if they discover that another client has made a change that interferes with their planned operations.
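    To make that concrete, here is a toy sketch in Python of the read/compute/commit-if-unchanged retry loop that an optimistic client runs instead of taking a lock. This is not the librados API; the hypothetical in-memory VersionedStore only stands in for "an object store offering atomic test-and-set on a single object":

```python
import threading

class VersionedStore:
    """Toy in-memory object store with per-key version numbers.

    NOT librados: it merely models an object store that provides an
    atomic test-and-set on a single object, so the optimistic pattern
    can be shown end to end.
    """
    def __init__(self):
        self._lock = threading.Lock()  # stands in for per-object atomicity
        self._data = {}                # key -> (version, value)

    def read(self, key):
        with self._lock:
            return self._data.get(key, (0, None))

    def compare_and_swap(self, key, expected_version, new_value):
        """Commit new_value only if nobody has written since expected_version."""
        with self._lock:
            version, _ = self._data.get(key, (0, None))
            if version != expected_version:
                return False  # another client interfered; caller must retry
            self._data[key] = (version + 1, new_value)
            return True

def optimistic_update(store, key, update_fn):
    """Read, compute, attempt to commit; on interference, discard and retry."""
    while True:
        version, value = store.read(key)
        if store.compare_and_swap(key, version, update_fn(value)):
            return

store = VersionedStore()
optimistic_update(store, "leaf0", lambda v: (v or 0) + 1)
optimistic_update(store, "leaf0", lambda v: (v or 0) + 1)
print(store.read("leaf0"))
```

    With little contention the retry loop almost never repeats, which is exactly why the optimistic approach avoids the bottleneck of a distributed lock manager on the common path.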

    I needed to design an algorithm that would guarantee the following, even in cases of splits and merges:

    • Forward progress, meaning that each client eventually makes progress. In particular, there is no combination of operations that will trigger a livelock or a deadlock.
    • Atomicity, meaning that writes either complete fully or are rolled back. Any client that dies while the system is in an intermediate state will be cleaned up.
    • Consistency, meaning that before and after every operation, the system is in a valid state.

    At Sam’s suggestion, I made the following assumptions about likely use cases:

    • Individual keys and values will be small – the exact size will vary, but I can assume they will fit in memory.
    • For purposes of establishing what constitutes consistency, users will only be able to access the key value store through my structure (i.e., not through librados).
    • Duplicate keys are not allowed.
    • The Ceph object store is reliable.

    With these goals in mind, I began to brainstorm ideas for algorithms. At first, I looked for simple ways to rearrange the steps in my existing code to resolve race conditions. After discussing a few of these ideas with Sam and discovering their flaws, I decided to start over. I opened several text files side by side, one for each operation, and began writing pseudocode. After a couple of days of coming up with ideas, examining various interleavings for race conditions, and discovering the reasons they would not work, I finally came up with an algorithm in which I could not identify any race conditions. I talked to Sam about it, and after much discussion, he agreed that it seemed to be valid.

    Ceph provides a number of useful features that are crucial to my design. In particular, I make use of the following:

    • Ceph allows the user to combine multiple write operations or multiple read operations into a single, atomic transaction, as long as they are all performed on a single object. I use this to implement several complex test-and-set operations. For example, I can have a transaction that only performs a write if the “size” xattr is set to a number less than 2 * k.
    • Ceph allows the user to write code that runs on the OSD instead of on the client, which makes some operations faster. For example, an OSD class can scan the object’s whole key-value map (omap) and return some information about it to the client without having to send the whole map over the network.
    • Ceph, of course, completely handles the challenges of distributing objects across different machines, creating replicas, and ensuring that transactions are atomic, consistent, isolated, and durable.
    • Teuthology is a powerful and relatively easy to use testing suite that allows me to test many different configurations of Ceph and of my program’s configurables on a cluster of several machines. The tools Teuthology provides are invaluable to my benchmark testing.

    I am still improving my algorithm as I run benchmarking tests in Teuthology. I will post a full write up of the algorithm, and a link to the source code on github, once it is in its final form.

    Designing the first draft of my algorithm was challenging, fun, and rewarding for a variety of reasons. The problem itself was interesting – I had to address a general use case using specific tools that differed significantly from the tools used to address similar use cases in the papers I had read. In addition to the intrigue of the problem itself, the process of refining and correcting my ideas through discussions with other developers was highly educational and effective. The open source model takes full advantage of this communal aspect of coding, embracing the principle that “given enough eyeballs, all bugs are shallow”. The Ceph community is the ideal open source community – full of friendly, patient, and smart developers who are all committed to creating good code. These factors make working on Ceph development in general, and on my project in particular, a truly fantastic experience.

    by Eleanor Cawthon at 21 August 2012 16:59

    20 August 2012

    Giuseppe Maxia

    Is Oracle really killing MySQL?

    There are plenty of "Oracle-is-killing-MySQL" headlines in the tech world.

    Is Oracle really consciously and willingly killing MySQL?

    I don't think so.

    Is Oracle damaging MySQL by taking the wrong steps? Probably so.

    This is my personal opinion, and AFAIK there is no official statement from Oracle on this matter, but I think I can summarize the Oracle standpoint as follows:

    • There is a strong and reasonable concern about security. Oracle's promise to its customers is that security breaches will be treated with discretion, and no information will be released that could help potential attackers;
    • There is also an equally strong but unreasonable concern that exposing bugs and code commits to public scrutiny will help MySQL competitors;
    • To address the security concern, Oracle wants to hide every aspect of bug fixing that may reveal security-related information:
      • bug reports that mention how the breach happens;
      • comments to commits that explain what has been done to fix the issue;
      • test cases that show the problem being solved.
    • From the security standpoint, the above steps have been implemented, and they look effective. Unfortunately, they have some side effects:

      • the bugs database is censored, and does not provide information to users about why they should upgrade;
      • the public revision control trees are mutilated; in fact, it looks like Oracle has just stopped updating them;
      • contributions to MySQL, which weren't easy before, are now made much harder;
      • trust in Oracle's good faith as MySQL steward is declining.

      The inevitable side effect is that the moves that have reduced the security risk have also partially addressed Oracle's concern about exposing its innovation to the competition, thus making MySQL de facto less open. Was it intentional? I don't know. What I know is that these actions, rather than damaging MySQL's direct competitors, are in fact having the opposite effect: traditional open source users now have more reasons to look at alternatives, and those competitors look more appealing now that Oracle has stiffened its approach to open source.

      The main point of this whole incident is that Oracle values its current customers more than its potential ones. While MySQL AB focused its business on the customers that the open source model would attract to its services, Oracle wants first and foremost to make its current customers happy, and it doesn't consider the future ones, the ones open source adoption would bring, worthy of its attention. In short, Oracle doesn't get the open source business model.

      OTOH, Oracle is doing a good job in the innovation front. A huge effort is going into new features and improvements in MySQL 5.6, showing that Oracle believes in the business behind MySQL and wants to make it grow. This is an undeniable benefit for MySQL and its users. However, there is less openness than before, because the source comes out less often and not in a shape that is suitable for contributions, but the code is open, and there is growth in both Oracle (which is taking ideas and code from MySQL forks) and MySQL forks, which merge Oracle changes into their releases. Even though the game is not played according to open source purists rules, Oracle is still a main player.

      What can we, the MySQL Community, do?

      We need to reinforce the idea that the open source model still works for MySQL. The business side is the only one that Oracle gets. Unfortunately, the classical Oracle sales model does not look favorably on a system where you win customers by distributing a free product and trying to please non-customers, in the hope that some of them will eventually buy your services.

      My point is that Oracle is unintentionally harming MySQL and its own image. If Oracle cares about MySQL, it should take action now to amend the fracture, before it becomes too deep.

      I don't have a solution to this issue, but I thought that spelling out the problem would perhaps help to find one.

    by Giuseppe Maxia ( at 20 August 2012 10:11

    Robert O'Callahan

    Granularity Of Import Directives In Programming Languages

    Look at any C/C++ source code in a large project and you'll see a lot of #include directives at the start of each file. Even in more modern languages like Java, C# and Rust, each file starts with a list of import directives.

    These directives are a form of boilerplate code that rarely convey anything useful to human readers. For small files, they can occupy a significant fraction of the file. They have to be maintained by developers. They can cause merge conflicts.

    On the other hand, they have some practical benefits. Import directives can be used to resolve ambiguities: when the same name occurs in two different scopes, import directives can be used to map the name to the desired definition. Another benefit is that declaring dependencies explicitly can make it easier to view and track dependencies, and prevent undesirable dependencies. Also, explicit dependencies reduce the problem of new definitions for a name changing the meaning of existing code.
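    To make the ambiguity-resolution point concrete, here is a minimal sketch in Python (chosen for brevity; the same idea applies to the C++, Java, C# and Rust directives discussed above). Both the math and cmath modules export a name sqrt; only the import directive determines which definition an unqualified use of the name refers to.

```python
# Illustrative sketch: the same name "sqrt" is defined in two different
# scopes (modules); explicit import directives disambiguate it.
from math import sqrt as real_sqrt      # real-valued square root
from cmath import sqrt as complex_sqrt  # complex-valued square root

print(real_sqrt(4.0))      # -> 2.0
print(complex_sqrt(-4.0))  # -> 2j
```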

    Here's a proposal to capture most of the benefits, and greatly reduce the costs:

    • Make import directives per-module instead of per-file or per-translation-unit (except in rare cases where a particular file wants to use a name defined in multiple modules). This lets developers observe and constrain dependencies at the module level, which seems to be all we need.
    • Support labelled versions of a module's interface and allow import directives to import the names in a given version.
    • When importing between modules in the same project, use wildcard imports that import all public identifiers. When importing from an external project, import a particular interface version. When modules are in the same project --- by which I mean they belong to the same version control repository and are built and tested together --- adding new definitions to a module won't cause latent problems for other modules, so there is no need to protect against that via import directives.

    Unfortunately, you can't do those things in C/C++ --- at least, not without penalizing compilation time by creating omnibus header files. In new languages with more modern compilation models, you could. I wonder why it hasn't been done.

    by (Robert) at 20 August 2012 03:12

    18 August 2012

    Sridhar Dhanapalan

    XO-1 Training Pack

    Our One Education programme is growing like crazy, and many existing deployments are showing interest. We wanted to give them a choice of using their own XOs to participate in the teacher training, rather than requiring them to purchase new hardware. Many have developer-locked XO-1s, necessitating a different approach than our official One Education OS.

    The solution is our XO-1 Training Pack. This is a reconfiguration of OLPC OS 10.1.3 to be largely consistent with our 10.1.3-au release. It has been packaged for easy installation.

    Note that this is not a formal One Education OS release, and hence is not officially supported by OLPC Australia.

    If you’d like to take part in the One Education programme, or have questions, use the contact form on the front page.

    Update: We have a list of improvements in 10.1.3-au builds over the OLPC OS 10.1.3 release. Note that some features are not available in the XO-1 Training Pack owing to the lesser storage space available on XO-1 hardware. The release notes have been updated with more detail.

    Update: More information on our One News site.

    by Sridhar Dhanapalan at 18 August 2012 08:11

    16 August 2012

    Robert O'Callahan

    A Confession Of Sorts

    Reading this LWN article about sexual harassment at conferences led me to another story that was even more compelling and instructive. Read it now or the rest of my post will be unintelligible.

    I identified strongly with "Dr Glass" and would have behaved almost exactly the same. But I would have been hiding something, which means Dr Glass might have been too, as far as any observer in the story could know, and I think that adds another layer of implications to the story.

    The secret is that I would have been attracted to Luminous Girl, and though I would have successfully suppressed it in my behavior, it would have been hard work. A superhuman observer might notice me glancing at her a little more often than necessary; on a bad day I might even have lustful thoughts about her. This would taint my motives for protecting Luminous from the Awkward Guy: as well as being the right thing to do, it would be fun to live out the fantasy of the Heroic Protector Of The Attractive Woman. Thus, Awkward Guy's mistaken assessment of Luminous as Dr Glass' possession would not be completely off the mark.

    Does any of that matter if observable behavior is unchanged? In the Christian worldview, it certainly does, since God looks at the heart. But I think even many materialists would think differently if they knew or suspected what was really going on inside Dr-Glass-as-me.

    I would have done one thing very differently from Dr-Glass-of-the-story: I love hiking, but there is no way in the world I would go for a hike alone with an attractive teenage girl I barely know. There is a small possibility of disaster: not rape, but if she made advances, I can't be completely sure I would pass that test --- mainly because I have never faced such a test, and I don't want to. Anyway, the solution is simple: bring a third person along.

    by (Robert) at 16 August 2012 22:20

    Selena Deckelmann

    Submissions for Lightning Talks for Postgres Open being accepted

    By popular demand, we’re having a session of lightning talks at Postgres Open this year!

    What is a lightning talk, you ask? It’s a 5-minute talk on a topic of your choosing. (For this conference, it should be at least vaguely postgres- or database-related.) Make it as serious or entertaining as you like. If you’ve never given a talk at a conference before, this is a great way to try it out. The audience is forgiving, and it’s only 5 minutes!

    Slides are not required, but are helpful.

    The session will be 5pm – 6pm on Tuesday, Sept 18. Sign up today!

    There’s a limited number of spaces, so get your talks in now! :)

    (Many thanks to Gabrielle for writing this blog post!)

    (And psst – don’t forget to buy your tickets! :)

    by selena at 16 August 2012 00:38

    15 August 2012

    Travis Reitter

    Conferences and anti-harassment policies

    Valerie Aurora posted a very insightful article about harassment at conferences. She focuses largely on "hacker"/security conferences, like DEFCON, but the general principles apply to all conferences. And I've personally seen related bad behavior (though to a much lesser degree) at conferences I've attended (which have all been based around open source software, not "hacking"/security).

    The work that Valerie's Ada Initiative is doing is critically important for the technology industries, which still have a lot of work to do on being inclusive (particularly to women).

    As some personal anecdotes, most of the best conferences I've attended have had strict anti-harassment policies/codes of conduct (including GUADEC). They help make the conference a more welcoming place by clarifying unacceptable behavior, which brings in more participants from a wider variety of backgrounds, which makes the conference better for everyone.

    Specifically, GUADEC's and Gnome's work to include more women has really started to pay off. The official count this year was that 17% of the attendees were women, and the difference was obvious, especially to anyone who has been attending since Vilanova in 2006, like me; back then I can honestly remember only a few women attendees. The Gnome Outreach Program for Women certainly deserves a fair amount of credit here, as does the Gnome community's clear and consistent enforcement of the Code of Conduct.

    Codes of Conduct/anti-harassment policies/many laws should not be necessary. But they clearly are because some people otherwise don't understand or refuse to comply with common decency. These policies really require a fairly minimal amount of effort to create and enforce, open attendance to a much wider and diverse audience, and benefit everyone as a result. Everyone wins!

    15 August 2012 17:43

    Selena Deckelmann

    Giving back: “Career advice in less than 5 minutes”

    Garann Means came up with this brilliant idea: give career advice about the big topics women in tech are facing IN LESS THAN 5 MINUTES.

    So she started a gist to collect advice!

    Have a look at the list of topics, and if you’ve got something to add do this:

    1. Make a short video
    2. Upload it to Vimeo
    3. Comment on the gist
    4. Tweet it out!
    5. Feel like the awesome mentor and contributor to the advancement of women in tech that you are!

    Also, anyone have a good idea for a tag we should use?

    I’m also collecting links to other resources.

    Finally, I was talking with some people here in Portland about starting an advice column from respected recruiters and hiring managers. Would you submit a question? I’m thinking like Captain Awkward, but focused on issues women in tech face in looking for jobs, navigating a male-dominated working world, managing and hiring.

    by selena at 15 August 2012 16:44

    Silvia Pfeiffer

    Why I became a HTML5 co-editor

    A few weeks ago, I had the honor to be appointed as part of the editorial team of the W3C HTML5 specification.

    Since Ian Hickson had recently decided to focus solely on editing the WHATWG HTML living standard specification, the W3C started looking for other editors to take the existing HTML5 specification to REC level. REC level is what other standards organizations call a “ratified standard”.

    But what does REC level really mean for HTML?

    In my probably somewhat subjective view, recommendation level means that a snapshot is taken of the continuously evolving HTML spec, which has a comprehensive feature set, that is implemented in a cross-browser interoperable way, has a complete test set for the features, and has received wide review. The latter implies that other groups in the W3C have had a chance to look at the specification and make sure it satisfies their basic requirements, which include e.g. applicability to all users (accessibility, internationalization), platforms, and devices (mobile, TV).

    Basically it means that we stop for a “moment”, take a deep breath, polish the feature set that we’ve been working on this far, and make sure we all agree on it, before we get back to changing the world with cool new stuff. In a software project we would call it a release branch with feature freeze.

    Now, as productive as that may sound for software – it’s not actually that exciting for a specification. Firstly, the most exciting things happen when writing new features. Secondly, development of browsers doesn’t just magically stop to get the release (REC) happening. And lastly, if we’ve done our specification work well, there should be only a little work left to do. Basically, it’s the thankless work of tidying up that we’re looking at here. :-)

    So, why am I doing it? I am not doing this for money – I’m currently part-time contracting to Google’s accessibility team working on video accessibility and this editor work is not covered by my contract. It wasn’t possible to reconcile polishing work on a specification with the goals of my contract, which include pushing new accessibility features forward. Therefore, when invited, I decided to offer my spare time to the W3C.

    I’m giving this time under the condition that I’d only be looking at accessibility- and video-related sections. This is where my interest and expertise lie, and where I’m passionate about getting things right. I want to make sure that we create accessibility features that will be implemented, and that we polish existing video features. I want to make sure we don’t diverge from implementations, which continue to get updated and may follow the WHATWG spec or other needs.

    I am not yet completely sure what the editorship will entail. Will we look at tests, too? Will we get involved in …? Thus far we’ve been preparing for our work by setting up adequate version control repositories, building a spec creation process, discussing how to bridge to the WHATWG commits, and analysing the long list of bugs to see how to cope with them. There’s plenty of actual text editing work ahead and the team is shaping up well! I look forward to the new experiences.

    by silvia at 15 August 2012 13:19

    14 August 2012

    Sage Weil

    v0.48.1 ‘argonaut’ stable update released

    We’ve built and pushed the first update to the argonaut stable release.  This branch has a range of small fixes for stability, compatibility, and performance, but no major changes in functionality.  The stability fixes are particularly important for large clusters with many OSDs, and for network environments where intermittent network failures are more common.

    The highlights include:

    • mkcephfs: use default ‘keyring’, ‘osd data’, ‘osd journal’ paths when not specified in conf
    • msgr: various fixes to socket error handling
    • osd: reduce scrub overhead
    • osd: misc peering fixes (past_interval sharing, pgs stuck in ‘peering’ states)
    • osd: fail on EIO in read path (do not silently ignore read errors from failing disks)
    • osd: avoid internal heartbeat errors by breaking some large transactions into pieces
    • osd: fix osdmap catch-up during startup (catch up and then add daemon to osdmap)
    • osd: fix spurious ‘misdirected op’ messages
    • osd: report scrub status via ‘pg … query’
    • rbd: fix race when watch registrations are resent
    • rbd: fix rbd image id assignment scheme (new image data objects have slightly different names)
    • rbd: fix perf stats for cache hit rate
    • rbd tool: fix off-by-one in key name (crash when empty key specified)
    • rbd: more robust udev rules
    • rados tool: copy object, pool commands
    • radosgw: fix in usage stats trimming
    • radosgw: misc compatibility fixes (date strings, ETag quoting, swift headers, etc.)
    • ceph-fuse: fix locking in read/write paths
    • mon: fix rare race corrupting on-disk data
    • config: fix admin socket ‘config set’ command
    • log: fix in-memory log event gathering
    • debian: remove crush headers, include librados-config
    • rpm: add ceph-disk-{activate, prepare}

    The fix for the radosgw usage trimming is incompatible with v0.48 (which was effectively broken).  You now need to use the v0.48.1 version of radosgw-admin to initiate usage stats trimming.

    There are a range of smaller bug fixes as well.  For a complete list of what went into this release, please see the release notes and changelog.

    You can get this stable update from the usual locations:

    by sage at 14 August 2012 17:18

    13 August 2012

    Sage Weil

    v0.50 released

    The next development release v0.50 is ready, and includes:

    • osd: major refactor of PG peering and threading
    • osd: more/better dump info about in-progress operations
    • osd: better tracking of recent slow operations
    • osd: misc fixes
    • librados: watch/notify fixes, misc memory leaks
    • mon: misc fixes
    • mon: less-destructive ceph-mon –mkfs behavior
    • rados: copy rados pools
    • radosgw: various compatibility fixes

    Right now the main development going on is with the RBD layering, which will hit master shortly, and OSD performance, various bits of which are being integrated.  There was also a large pile of messenger cleanups and race fixes that will be in v0.52.

    You can get v0.50 from the usual locations:

    One note: there was a build issue with the latest gcc that affected the Debian squeeze and wheezy builds; those packages were not built for this release.

    by sage at 13 August 2012 21:45

    12 August 2012

    Robert O'Callahan

    Attention NZ TV Sports Interviewers

    When interviewing athletes after an event, I want to know what they think, not what you think. So, why are you always making statements and inviting the subject to agree with you? Instead, ask open-ended questions.


    Not this:

    • "It was tough out there for the last twenty minutes, wouldn't you say?"
    • "It looked like you were completely calm at the start?"

    But this:

    • "What was the toughest part of the match?"
    • "How did you feel at the start?"

    Same goes for any other kind of interview, but sports interviewers irritate me the most.

    by (Robert) at 12 August 2012 22:42

    Selena Deckelmann

    LA Postgres first meeting is on for Tuesday, Aug 28!

    The meeting is scheduled for Tuesday, August 28, at 7:30pm at 701 Santa Monica Blvd Suite 310, Santa Monica, CA.

    From the latest posting on the Meetup group:

    Beer and Stories

    We huffed and we puffed and now we got beer at the meeting (thanks!). Beers will be exchanged for interesting Postgres stories and facts you have, so ya better brush up on your favorite Postgres bits.

    Here is a good resource for that:

    Parking

    The venue is offering free parking, which as we all know is a precious resource in LA. Since it's gated, a volunteer will be there to meet you and let you in. Please get there on time, as the volunteers letting you in are also part of the Meetup and will not be available shortly after it starts. We will leave you phone numbers to call just in case. There is also fairly cheap parking right across the street at the library in case you need more parking.

    Lightning Talks

    There will be a lightning talk sign-up at the Meetup, and we will have various video connectors. Still, if you know you are planning to give one, let us know.

    How to communicate with LA Postgres Organizers

    Here are a few ways I figured out you can reach us.

    Twitter: @lapostgres
    Freenode IRC: #lapostgres

    See ya there!

    by selena at 12 August 2012 05:10

    11 August 2012

    Jonathan Oxer

    All work and no play makes @jonoxer a medical experiment

    Recently I’ve pretty much disappeared from every field of endeavour I’m involved in. This post is to give (too much) detail for anyone who wonders what’s been going on with me recently. If you’re not interested in my personal tale of woe, move on! Life is too short. The TL;DR version is that I got a runny nose and felt bad.

    For much of this year I’ve been suffering from headaches and a general feeling of pressure in my head. I wasn’t even aware how frequently I'd started taking painkillers until my buddy Marc Alexander pointed out that barely a week would go by that I wouldn’t say something about needing Solprin at some random time. The problem snuck up on me until I was taking painkillers every few days without realising how many I was going through.

    A few weeks ago the head pain ramped up a notch and became quite severe, so I took a few intermittent days off work, trying to give myself a chance to recover from whatever was ailing me. It stayed about the same until a week ago Tuesday, when it got bad enough that I had to leave the office and go see my GP: I was down to a very low level of functionality, so it was time to do something about it.

    The doc diagnosed it as sinusitis, so basically just an infection of the sinuses. He prescribed strong painkillers, antibiotics, and anti-inflammatories, and I went home to rest and recover. That night the pain was incredible: I slept a total of maybe 45 minutes, in between periods of holding my head and moaning. Not good.

    The next day I was a mess, so the doc made a housecall and ramped it up a notch: prescribed steroids as a general anti-inflammatory, doubled the antibiotic dose, and added a couple of other medications that might help.

    The doc was definitely doing the right thing and we were on track with treating it, but we didn’t realise just how bad an infection we were dealing with. The next few days were hell: the slightest pressure change was like being hit in the head with a bat, and I had to sit upright for 5 days straight with almost no sleep. I didn’t get more than 2 hours sleep in any 24 hour period that entire week, mostly less.

    So by Saturday morning it was obvious we weren’t making much progress and the situation wasn’t sustainable. I’d been in so much pain (and sleep deprived) for so long that I wasn’t particularly rational anymore. Ann drove me to the Knox Private Hospital ER to see what else could be done.

    The physician sent me off for a CT scan, which produced a spectacular result. His comment was “In 30 years I’ve never seen anything like that before”.

    Well, at least I knew the problem was real! That was a relief, of sorts.

    Yes, the problem is sinusitis, my GP was right. The interesting bit is how it’s manifested itself.

    Sinusitis is a general term for inflammations of the paranasal sinuses, but it covers a variety of sub-classifications. For example, it can be classified by location: there are four major sections of the sinus that can be affected, and they’re mirror images on the two sides of your head. The CT scan showed I’m suffering from not just one type, but all four types by location at the same time. But here’s the kicker: it’s only on the left side of my head!

    The CT images are amazing. It’s like a composite image from a medical textbook that you would expect to find with a caption something like:

    “The left half of the scan shows every possible inflammation type simultaneously (maxillary, frontal, ethmoid, and sphenoid) with 100% occlusion on each. The right half of the scan shows a perfectly healthy result for comparison.”

    Sinusitis can also be sub-classified in other ways, such as by origin of inflammation (viral versus bacterial), and other characteristics. Basically, you name it, I’ve got it. But only on one side! Send my right half to work, the left can stay home and recover.

    Back to the story. Once the doctor saw the scan I was immediately admitted to hospital, and next thing I knew I had an IV inserted and was being internally washed with a variety of antibiotics. The next 5 days were basically more of the same: trials of different painkillers to find something that would suppress the pain, different antibiotics, etc. Sleep didn’t improve, though, and I think the most sleep I ever had in any 24 hours while in hospital was about 3 hours, not much better than when I was at home.

    By last Thursday I was at the end of a 9 day stint starting at home and ending in hospital where I averaged maybe 2 hours sleep per night, and I was mentally turning to mush. The hospital environment was driving me insane, the painkillers that were tried weren’t doing the job, and what I desperately needed more than anything was to get a chance to sleep. The most effective painkillers that were used could only be administered every 6 hours and had an effective duration of about 3 to 4 hours, so each time they were administered I had a short window in which I desperately wanted to get just a few hours sleep. But of course that’s exactly when it would be time to load up the IV with a new antibiotic, then when that was done it had to be flushed, then after being so pumped full of fluid I’d have to go to the toilet which was quite a complicated exercise with the IV, then they’d do obs, by which time the painkillers were wearing off and they’d “leave me in peace to sleep”. Great. So then the last couple of hours of the 6 hour cycle would be spent moaning and sweating again in pain, until the cycle started again.

    It wasn’t helped by the constant background hospital noise: dinner carts, rolling beds, loud TVs, shouted conversations, vacuum cleaners. Imagine having the worst headache you could possibly imagine, like being punched in the head continuously, while lying on an uncomfortable plastic bed near the steps of Flinders Street station in rush hour with all the noise around you, and try to sleep. Good luck.

    Thankyou so very much, hospital schedule. You suck.

    Despite this the antibiotics seemed to be making good progress with the infection so the problem now was really my increasing sleep deprivation, which I’m pretty sure was a major cause of the continuing pain. I’d got to a point where I just couldn’t handle it mentally anymore. By last Wednesday night the only thing keeping me from screaming “SHUT THE HELL UP!” in frustration was that I repeated to myself as a mantra “I’m going to leave tomorrow. I’m going to leave tomorrow. I’m going to leave tomorrow.”

    I had no intention of spending another night in that place. I’d rather suffer the pain at home, where at least I’d have a comfortable environment and could be proactive about when things happened instead of being a battery hen in a cage.

    So on Thursday morning when the doctor did his rounds I asked him what it would take for me to go home that day. He didn’t like the idea at all, and took a bit of convincing, but in the end we agreed on a compromise: I’d remain as an in-patient at the hospital, but under the “Hospital in the Home” scheme, where I’d still be under their care but outside their facility. Instead of being in bed 13A in Miller ward, I’d be in bed 1 in Oxer ward. I’d need to continue IV antibiotics, but that could be handled by home visits by a nurse.

    Yes, that means any medical staff who need to visit me have to drive all the way to my place to do it. And yes, it costs a bomb, but you’d be surprised: the daily cost of maintaining a fully equipped hospital bed is so high that even when you take into account the home-visit fees, it’s still only about 1/3rd the cost of staying in hospital! It’s awesome. I’m getting special personal free-range hen home service, for far less than the cost of staying in the battery hen cages.

    Since getting home my situation has improved significantly. Within the first day I went from the prescribed painkillers being inadequate, to only needing half the prescribed dosage. I had more sleep in the first night than I had in the previous week combined, while taking less painkillers.

    Physically I’m still exhausted: a trip to the kitchen and back leaves me feeling like I’ve been on a big run and need a good rest. My body is still fighting a battle, and I’m having daily visits by a nurse to administer IV antibiotics. I’m told the pain could continue for another week, and it may be many weeks before the root cause of the problem is actually dealt with decisively. It’s not likely that I’ll be seen at work in the next week, but after that we’ll just have to see.

    But, as this over-long post hopefully attests, my mental acuity is beginning to return to normal. As long as I limit myself to short doses and rest well there’s a lot I can do with just a laptop, a comfortable chair, and some wireless internets.

    Finally, a huge thankyou to Ann and the kids! I’d have been stuffed without them looking after me so patiently.

    11 August 2012 11:27

    09 August 2012

    Vik Olliver

    Quick update now before I go to bed: the red bits are now on Patches - assembly by mallet is so satisfying! - and I had to redo the Y bed to centralise the drive shaft more. There is also one more 608 to further constrain the left-hand Y rail. The right hand one remains relatively lightly guided.

    I've got the switches mechanically located and functioning, though the Y bed motor bracket needs a protrusion on it to strategically poke the switch with. I just free-welded some PLA into place by hand and I'll update the files later.

    If I run into an unsolvable problem with even-sided drive on the bed, I can simply duplicate the Z axis drive components and run two drive rods. It's tempting but I want to go really minimal for this build.

    The mounting bracket for the stepper drivers needs designing, but that'll be an integral component of the Arduino mounting backboard. There will also be a patch-pad holder for discrete components. This one is designed not to have a custom PCB, and to be ultra accessible. Replacing or substituting components will be a relatively simple process with no soldering. I hope.

    Vik :v)

    by (Vik Olliver) at 09 August 2012 10:52

    08 August 2012

    Paul McKenney

    Parallel Programming: August 2012 Update

    This release of the parallel programming book features a completion of the functional-testing validation chapter (performance testing likely gets a separate chapter), updates to the locking and transactional-memory sections, and lots of fixes, primarily from Namhyung Kim, but with a fix to a particularly gnarly memory-ordering example from David Ungar. (The ppcmem tool is even more highly recommended as a result.)

    This release is rather late. This tardiness was in roughly equal parts due to:

    1. Inertia.
    2. Unexpected requests for more information on hardware transactional memory.
    3. Frequent changes in approach to the validation chapter.
    4. A sudden desire to add a third example to the "Partitioning and Synchronization Design" chapter. This was motivated by seeing people blog on the difficulty of solving mazes in parallel, and it would indeed be difficult if you confined yourself to their designs. However, the results were surprising, so much so that I published a paper describing a scalable solution, which was not simply embarrassingly parallel, but rather humiliatingly parallel. Which means that this chapter is still short a third example.

    Future releases will hopefully return to the 3-4 per year originally intended.

    As always, git:// will be updated in real time.

    08 August 2012 19:38

    Selena Deckelmann

    Postgres Open 2012 schedule announced!

    We’re pleased to announce the Postgres Open 2012 schedule!

    A very special thanks to EnterpriseDB and Heroku for their Partner sponsorships. Please get in touch if you’d like to sponsor the conference this year!

    Please see a list of our currently accepted talks and keynotes below:

    1. Keynote – Jacob Kaplan-Moss
    2. Deploying maximum HA architecture with Postgres by Denish Patel
    3. PostgreSQL Backup Strategies by Magnus Hagander
    4. PostgreSQL Access Controls (AuthN, AuthZ, Perms) by Stephen Frost
    5. Full-text search – seek and ye shall find by Dan Scott
    6. PostgreSQL When It's Not Your Job by Christophe Pettus
    7. Programming the SQL Way with Common Table Expressions by Bruce Momjian
    8. High Availability with PostgreSQL and Pacemaker by Shaun M. Thomas
    9. This Is PostGIS by Paul Ramsey with ?
    10. Super Jumbo Deluxe by Josh Berkus
    11. Using the PostgreSQL System Catalogs by Robert Haas
    12. Range Types in PostgreSQL 9.2 – Your Life Will Never Be the Same by Jonathan S. Katz
    13. DVDStore Benchmark and PostgreSQL by Jignesh Shah
    14. PG Extractor – A smarter pg_dump by Keith Fiske
    15. Performance Improvements in PostgreSQL 9.2 by Robert Haas
    16. Logging: Not Just for Lumberjacks by Gabrielle Roth
    17. Choosing a logical replication system: Slony vs Bucardo by David Christensen
    18. PostgreSQL on ZFS: Replication, Backup, and Human Disaster Recovery by Keith Paskett
    19. 12 Years of PostgreSQL in Critical Messaging by John Scott
    20. Embracing the Web with JSON and PLV8 by Will Leinweber
    21. Retail DDL by Andrew Dunstan
    22. An object oriented approach to data driven software development by David Benoit
    23. A Shared-nothing cluster system: Postgres-XC by Amit Khandekar
    24. Scaling out by distributing and replicating data in Postgres-XC by Ashutosh Bapat
    25. Disaster Recovery of PostgreSQL databases in Business Critical environments by Gabriele Bartolini
    26. Leveraging PLV8 in Javascript-heavy Web Applications by Taras Mitran
    27. PostgreSQL in the cloud: Theory and Practice by John Melesky
    28. Query Logging and Workload Analysis by Greg Smith
    29. A Batch of Commit Batching by Peter Geoghegan
    30. Large Scale MySQL Migration to PostgreSQL by Dimitri Fontaine
    31. Temporal Database Demo by Jeff Davis
    32. Performance Scaling Roadmap by Greg Smith
    33. Postgres is the new default – how we transitioned our platform at Engine Yard and why you should too by Ines Sombra
    34. How Akiban Implemented a New Database Compatible with the PostgreSQL Protocol by Ori Herrnstadt
    35. Scaling Postgres with some help from Redis by Josiah Carlson
    36. Lightning Talks by Gavin Roy

    Stay tuned for our call for Lightning Talks.

    by selena at 08 August 2012 18:03

    Lev Lafayette

    Ticker Tape News for Drupal 7

    The "marquee" element was originally designed by Microsoft for early versions of Internet Explorer. Rather like Netscape's "blink" element, it is usually thoroughly unloved by web developers for being proprietary and non-standard, causing usability problems, and distracting from other content. It is considered deprecated by the W3C and not advised for use in any HTML documents.

    read more

    by lev_lafayette at 08 August 2012 03:43

    07 August 2012

    Lev Lafayette

    MATLAB X-Forwarding, Command Line, and PBS Scripts

    Recently it has been noticed that the MATLAB bench suite fails when forwarding OpenGL commands (by "fails" I mean the application dies horribly). This is not a problem unique to MATLAB but common to a range of applications; fortunately, there is a simple workaround. My experience is that this is a fairly recent problem too, so I shake my fist in futility at certain video card manufacturers who are harming scientific research, dammit.

    read more

    by lev_lafayette at 07 August 2012 00:23

    06 August 2012

    Jeff Waugh

    A Sexual Awakening

    by Jeff Waugh at 06 August 2012 10:22

    Selena Deckelmann

    Los Angeles Meetup Group formed!

    Yesterday on IRC, a Postgres user — goodwill in #postgresql on Freenode — piped up and said he’d really like to see a Los Angeles, CA Meetup.

    We have a mailing list, but it’s gone a bit quiet in recent months.

    So, in a fit of doocracy, goodwill created LA Postgres.

    He needs a critical mass of folks before announcing the first meeting. So sign up today, and help create a vibrant Postgres scene in Los Angeles.

    by selena at 06 August 2012 08:00

    05 August 2012

    Selena Deckelmann

    Re-thinking “Mistakes were made”: free and open source software and teaching

    I’m working on my keynote for FrOSCon right now.

    They asked me to revisit the “Mistakes were Made” talk. My introduction will probably be much the same. A core idea is the theory that the ratio of failure to success remains mostly constant over time. So, in order to succeed a lot, we need to be trying and failing a lot more.

    But this talk, I am planning to go into what concerns me the most about open source software: succession.

    What I will argue is that we need to think and do more about teaching. Free and open source software activists have to be the best teachers. Our work is considered so mysterious, so difficult and so out-of-reach. That mythology serves the interests of proprietary software and discourages tinkerers, dreamers and other allies from joining our projects. We are discouraging many young people, in particular.

    If we look at our track record, it’s clear that we could be doing better. We’ve made some mistakes. In the same way that we can learn from computing systems failures, we can learn how to teach better. We can learn to make space for newcomers to make mistakes. And all this will make our software better, in the end.

    I’m not a professional teacher, but my husband is. I’m just really starting to learn what teaching can be, and how I can do better.

    Not every developer has to learn how to teach well. But every developer should know what teaching actually is.

    by selena at 05 August 2012 17:15

    03 August 2012

    Selena Deckelmann

    Leveling up: handling conflict like a boss

    I’m finding myself in conversations with friends and colleagues lately about strategy, conflict and overcoming fear. At Ada Camp DC, there were multiple sessions on Imposter Syndrome, and many friends were in career transitions.

    So, I decided to share parts of private conversations I’ve been having. I think of these conversations as “leveling up.” As the women I know become team leads, managers, directors and executives, we’re all facing similar sets of problems and struggling through as best we can.

    I hope that people find this stuff useful. I have benefited from a great deal of mentoring and support over the years, and my hope is that this helps someone else in the same way.

    Someone asked for some help in handling conflicts, both at work and personally. Specifically, they mentioned that they were plagued by self-doubt.

    Friends have remarked to me that I “seem so confident” or that they “wish they could be as sure about things” as I am.

    When someone says that to me, I get confused for a minute. Because I question myself all the time, wonder if I am doing the right things, and often think that I am really, really screwing things up. I used to never talk about those moments with other people. I felt pretty alone.

    People have told me that I’m “argumentative,” or more politely “a little intense.” I tend to engage in conflict directly, and to resolve problems with people by talking or having arguments. I can be the type of conversationalist that’s a little scary to people who aren’t used to so much directness. But here’s the secret: I wasn’t born like this.

    Confidence is learned and a gift to yourself

    Confidence isn’t an innate talent. It’s a skill that you cultivate, and a set of behaviors you can learn. Confidence is what you project to the outside world, and doesn’t necessarily mirror what’s inside. (I’m thinking as I write that — “duh, everyone knows that, don’t write that, Selena!” But really, there are many people who think that to be confident, you have to *feel* confident all the time. And that’s just not true.)

    Also, there are many styles of conflict resolution, some that don’t involve arguing at all. Just because you tend to prefer one style, doesn’t mean you can’t learn others.

    Confidence is also a gift you give to yourself, because you deserve to not feel like crap after an argument. A lot of the questioning and self-blame people put themselves through is unnecessary. Learning from arguments doesn’t have to involve suffering.

    Problem solving: my problem or your problem?

    The most important mental model I’ve developed in the last decade is distinguishing between problems that are “my problem” and those that are someone else’s. For problems that are mine, I take action without having conversation or consensus building and then let people know what I’ve done. I apply this in my marriage, my open source work and in business — and it has made me SO MUCH HAPPIER. In a corporate setting, this is probably the ask forgiveness way of operating.

    When something is someone else’s problem, I think carefully about whether I want to help the other person solve it. You are under no obligation to solve other people’s problems.

    If I decide to help, I think through possible solutions before talking with the person about it. When I get to the point where I actually talk with someone about a problem, I try to ask the other person what they think before offering my own solution. I find acting out these conversations with a trusted advisor ahead of time is very, very helpful.

    That’s too simplistic to apply to every type of business problem out there, but it’s a calming thought pattern when I first start problem solving.

    When arguments feel hostile

    From the research (Gottman’s, specifically), contempt is the primary indicator of whether a marriage survives. If someone is treating you with contempt, or you are using contempt in arguments, that’s a big warning sign. My experience has been that relationships in this state can be repaired, but it takes a lot of work.

    In business, if someone treats me with contempt, I raise the issue in a business-appropriate way, and if it continues, I get the hell out of there. Life is too short to be treated like crap. Not everyone has the privilege of being able to switch jobs, but start planning your exit strategy. You deserve a long, contempt-free life.

    Recommended Reading

    I’m going to share a few of the best books I know concerning relationship conflict. In my opinion, relationship skills apply equally to personal and professional lives, and the lessons learned in one context translate naturally to the other.

    There are a lot of very bad books out there that will give you counter-productive, and not-science-based advice (and I have read many of them). I found that good books paired with advice from a counsellor who strictly adhered to proven-with-science strategies measurably helped me.

    Here’s the books and training I recommend:

    • The Passionate Marriage, and Intimacy and Desire:
      Both books are fantastic for thinking carefully about what marriage really is for you. Defining what intimacy is helped me A LOT in all my relationships. Marriage is a special and weird relationship, and not one that I was prepared for at all.
    • Pretty much anything by John and Julie Gottman, like The Relationship Cure:
      They’ve also been featured on This American Life, and those podcasts are worth a listen, and slightly more fun than slogging through their “10 steps” type books.
    • Harvard University Negotiation training:
      I arranged for a version of this training to be given just before OSCON for women in open source community management last year, and it was amazing. Every woman who attended said it changed their professional and personal relationships. It’s the type of thing people often can get work to pay for, as it’s obviously work-related training.
    • Liespotting:
      There’s a bit of pseudoscience in it IMO, but lots of very entertaining stories. There’s a chapter in it about your trusted circle of advisors, and how to test out and develop that circle over time for personal and professional advice. I started working on this for myself last year, and the people who I now turn to are an invaluable part of decision making, and really, my entire life.
    • If you’re struggling with illogical behavior and influence patterns (like: “Why the hell did person X do THAT for person Y?”), you may find _Influence_ useful as a primer on how skilled people get others to do things.

    And on books that didn’t really help me: there’s a series of books I’ve tried to read about “verbal self defense”, but to be honest, none of them helped me. Reading them made me feel better temporarily as I started to recognize different types of “attacks,” but I found their suggestions too difficult to remember and implement in an emotionally charged situation.

    I’d love to hear from anyone on strategies that work for you, and books that have helped you out.

    by selena at 03 August 2012 20:40

    Bede Mudge

    Check out this new Oatmeal comic - too true.

    by Bane Macarbe at 03 August 2012 19:27

    Update: Android Apps FTW!

    As an update to my recent android app post, Unlock With WiFi appears to no longer have a free version available. There is a paid version available for $2.49, and it is a very useful app, but unless you need automatic unlocking between multiple wifi networks it may seem a bit pricey an investment. If you want to try a free alternative, Bluetooth and Wifi Unlocker is worth a look.

    by Bane Macarbe at 03 August 2012 19:23

    02 August 2012

    Stewart Smith

    McEwan’s Export

    Tasty, not especially special, and does feel like something that’s more mass produced than much of the beer I consume (can that be a valid description?). That being said, this cold Melbournian enjoyed it.


    by Stewart Smith at 02 August 2012 11:08

    01 August 2012

    Selena Deckelmann

    Activism in a giant, hierarchical bureaucracy: Lessons from a consultant to the military

    My favorite talk about activism and behavior change at OSCON 2012 came from an unexpected source: Kane McLean, part of the Strategy & Communications Group at BRTRC Technology Research Corporation, who currently works supporting the Under Secretary of the Army at the United States Army Office of Business Transformation.

    This talk blew my mind for a number of reasons:

    • The whole talk was a behavior change manifesto.
    • This was a talk about implementing open source technology inside the US Military.
    • Kane was a great speaker.
    • The advice was practical, actionable and immediately redistributable.

    I evangelized the 7 points made in the talk to at least 10 people who missed the talk later in the day. And I had the distinct pleasure of introducing Kane to BJ Fogg‘s work.

    What I love about this is the inevitability of his method. And especially the admonition to ignore the haters. Well, he said “DON’T WASTE MUCH EFFORT ON RESISTANCE” (caps mine). But that’s pretty much “haters gonna hate.”

    Anyway, Kane, you rock.

    I put together my tweets below.

    [View the story "Activism in a giant, hierarchical bureaucracy " on Storify]

    by selena at 01 August 2012 02:16

    31 July 2012

    Emmanuele Bassi

    The Queen’s Rebuke

    news of my death abandonment of the GNOME community have been greatly exaggerated.

    seriously: I’m still here at GUADEC (typing this from the common area); I’m still on the Board of the GNOME Foundation; and I’m still working on GNOME tech, like Clutter, GLib, and GTK+.

    I am also working at Mozilla (and we’re hiring! :-) ), but I worked at OpenedHand and at Intel before, and that never stopped me from actually doing stuff on the side; lots of people in this community do this — you don’t need to be employed full time by a company to contribute to GNOME, or to try to give the project goals and direction.

    on Sunday, I tweeted this:


    if it doesn’t show up, here’s what I wrote: we were always a bunch of friends working on stuff we loved in the face of unsurmountable odds. here’s to 15 more years.

    it’s very true that we lack resources. we always did. it’s also true that we are competing in a space that does not leave us much room. we didn’t get 20% of the desktop market either, though. we’re doing what we do not because of the market share, or because of the mind share, or because we want to be paid. we write GNOME, we document GNOME, we design GNOME, we translate GNOME because we love GNOME. you would need to pay us not to work on GNOME.

    everyone here at GUADEC is aware that hard times are upon us; we (presumably, though we don’t have any real metric to define that) have lost users. we definitely have lost sponsors. it’s not the first time, and I suspect it won’t be the last. what we haven’t lost are our passion for what we do; our mission, to provide a free environment for users to work with; and our willingness to drain all the swamps we have in the Free Software world.

    if you want to work with us, join the GNOME Foundation — as a member, or on the advisory board if you are interested in sponsoring the project. help out in one of the many teams, not just with code, but with design, documentation, translation, marketing, web development, and mentoring.

    we have so much work to do ahead of us to not only stay relevant, but to fulfill our mission, and blaze the trail to the future of Free and Open Source Software — we’ve got to get to it.

    by ebassi at 31 July 2012 11:39

    30 July 2012

    Paul McKenney

    Confessions of a Recovering Proprietary Programmer, Part IX

    My transition from proprietary programmer to open-source programmer definitely increased my travel. And with travel comes jet lag. Fortunately, there are ways to avoid jet lag, a few of which are discussed below.

    When devising algorithms for parallel hardware, understanding the capabilities of the underlying hardware and software is critically important. When devising ways of avoiding jet lag, understanding how your body and mind react to time shifts is no less important. Although people are not computers, and thus do vary, most people can shift one timezone east or two timezones west per day without ill effects (and perhaps you have noticed that it is easier to go to bed late than it is to get up early). Your body and mind will no doubt vary somewhat from the norm, so you will need to experiment a bit to learn your own personal limits.

    The typical limit of one timezone per day east and two timezones per day west means that the most difficult trip is eight timezones east, for example, from Oregon to England. Regardless of whether you adjust your body eight timezones east or 16 timezones west, you are looking at an eight-day adjustment period. In contrast, the 11.5 timezone summertime westward shift from Oregon to India requires only a six-day adjustment period.
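
    Under these rules of thumb, the adjustment period is just a ceiling division, whichever direction around the globe is shorter. A back-of-the-envelope sketch (the function and the 1-east/2-west rates are simply the rules of thumb above, not from any medical source; your own limits will vary):

```python
import math

def adjustment_days(shift_east_tz):
    """Days needed to adjust for a shift of `shift_east_tz` timezones
    eastward (0-24), assuming you can shift 1 timezone/day east or
    2 timezones/day west, and may adjust "the long way around"."""
    east_days = math.ceil(shift_east_tz / 1)          # adjusting eastward
    west_days = math.ceil((24 - shift_east_tz) / 2)   # adjusting westward instead
    return min(east_days, west_days)

# Oregon to England: 8 timezones east, or equivalently 16 west -- 8 days.
print(adjustment_days(8))      # -> 8
# Oregon to India in summer: 11.5 timezones west (12.5 east) -- 6 days.
print(adjustment_days(12.5))   # -> 6
```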

    The key insight is that you can often schedule the adjustment period. For example, on my recent trip to Italy, I set my alarm clock 45 minutes earlier each day for the week prior to the workshop. This meant that on the day of my departure, I woke up at midnight Pacific time. This is 9AM Italy time, which left me only a timezone or two to adjust upon arrival. Which was important, because I had to give a 90-minute talk at 9AM the following morning, Italy time. This pre-adjusting also meant that on the night before my departure, I went to bed at about 5PM Pacific time.

    This pre-adjusting approach clearly requires the cooperation of your family and co-workers. Most of my meetings are early in the morning, which makes eastward adjustments easier on my co-workers than westward adjustments. On the other hand, fully adjusting the roughly eight timezones westward to (say) China requires only four days, two of which might be weekend days. That said, there will be times when you simply cannot pre-adjust, and at such times you must take the brunt of a sudden large time change. More on this later.

    Especially when pre-adjusting eastwards, I feel like I am inflicting mild jet lag on myself. Exercising when I first get up does help, and the nearby 24-hour-per-day gym is an important part of my eastwards pre-adjustment regimen. Shifting mealtimes can be difficult given the expectations of families and co-workers, but in my experience it is of secondary importance.

    As I get older, it is getting easier to go to bed early, but sleeping earlier than normal on an airplane is still quite challenging. In contrast, getting to sleep two hours later than normal (when travelling westwards) works quite well, even on an airplane. Some swear by various sovereign sleeping aids ranging from alcohol to melatonin, but when travelling eastwards I often simply stay awake for the whole trip, in effect skipping one night's sleep. This approach will likely become untenable at some point, but it currently has the good side-effect of allowing me to get to sleep easily in the evening local time when I do arrive.

    But what if you cannot pre-adjust? It is possible to tough it out, especially with the help of caffeine. However, keep in mind that the half-life of caffeine in your body is about five hours (your mileage may vary), so unless you are already a heavy caffeine user, taking it late in the afternoon can be counterproductive. I am quite sensitive to caffeine, and normally must avoid taking any after about 9AM—as little as an ounce of chocolate in the afternoon will disrupt the following night's sleep.
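
    That five-hour half-life implies simple exponential decay, which makes the afternoon-caffeine problem easy to quantify. A purely illustrative sketch (the dose, timings, and the half-life itself are the rough figures from above, and vary from person to person):

```python
def caffeine_remaining(dose_mg, hours, half_life_h=5.0):
    """Milligrams of a caffeine dose still circulating after `hours`,
    assuming simple exponential decay with the given half-life."""
    return dose_mg * 0.5 ** (hours / half_life_h)

# A ~100 mg cup of coffee at 9AM: by 11PM (14 hours later) roughly
# 14 mg is still in your system -- enough to matter if you're sensitive.
print(round(caffeine_remaining(100, 14)))   # -> 14
```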

    The most difficult time will be that corresponding to about 3:30AM in your original timezone. For example, if I were to travel from Oregon nine timezones east to Europe, I would be very sleepy at about half past noon, Europe time. It can be very helpful to walk outside in full daylight during that time.

    All that said, everyone reacts to time changes a little differently, so you will likely need to experiment a bit to arrive at the approach best suited to your body and mind. But an approach informed by your own personal time-change limitations can be considerably more comfortable than toughing out a large time change!

    30 July 2012 00:02