FewBar.com - Make It Good This is the personal website of Clint Byrum. Any words on here are expressly Clint's, and not those of whatever employer he has at the moment. Copyright (c) Clint Byrum, 2017 http://fewbar.com/ Mon, 29 Oct 2018 16:14:56 +0000 Mon, 29 Oct 2018 16:14:56 +0000 Jekyll v3.7.4 YAML shyaml - Wrote a tool about it, here it goes <p>I’ll keep this brief and on point: <a href="https://fewbar.com/2017/01/a-love-letter-to-rust/">I love Rust</a> and yet, I have published so little <a href="https://github.com/SpamapS/rustygear">Rust code</a>, and as yet, none that actually saw real usage.</p> <p>Well, today that changes. With many thanks to my former employer, <a href="https://www.godaddy.com/">GoDaddy</a>, for assigning copyright of a work product to me, I am pleased to announce the publishing of <a href="https://crates.io/crates/shyaml">shyaml</a>.</p> <p>When I joined GoDaddy, among other duties, I was tasked with keeping their “Legacy” Kubernetes installation alive and running. The team that had built it was all gone, and I was fairly new to Kubernetes, so I had to tread lightly.</p> <p>As a result, I found myself constantly wishing I could see what <code class="highlighter-rouge">kubectl</code> was about to do. I was shocked that <code class="highlighter-rouge">kubectl</code> didn’t have a diff mode. In retrospect, I probably should have just learned Go, and added a <code class="highlighter-rouge">--diff</code> switch to <code class="highlighter-rouge">kubectl</code>. But I fell back on the tool I knew, and I had just written <code class="highlighter-rouge">shyaml</code> to get my Rust chops up. It was just a general CLI manipulation tool for YAML and JSON at the time.
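The structural diff at the heart of such a tool is simple enough to sketch in a few lines. This is a conceptual Python sketch, not shyaml’s actual Rust implementation; it operates on parsed documents, so JSON stands in for YAML here, and all names are illustrative:

```python
import json

def diff(old, new, path=""):
    """Yield (path, old_value, new_value) for every leaf that differs."""
    if isinstance(old, dict) and isinstance(new, dict):
        # Recurse into both mappings, covering added and removed keys too.
        for key in sorted(set(old) | set(new)):
            yield from diff(old.get(key), new.get(key), f"{path}.{key}")
    elif old != new:
        yield (path, old, new)

running = json.loads('{"replicas": 3, "image": "app:v1"}')
desired = json.loads('{"replicas": 5, "image": "app:v1"}')
for path, was, now in diff(running, desired):
    print(f"{path}: {was} -> {now}")  # prints ".replicas: 3 -> 5"
```

Real Kubernetes objects need more care than this (lists, server-populated defaults, and so on), which is exactly where the intelligence has to go.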
Adding the ability to intelligently diff Kubernetes objects was pretty straightforward.</p> <p>So I added a <code class="highlighter-rouge">kubediff</code> command, which tries to show you the real effect a <code class="highlighter-rouge">kubectl apply</code> command will have before you run it. It’s not perfect, but it generally works.</p> <p>And so, I give this tool to the world. I hope to make it even more general-purpose, as time permits. But until then, please enjoy the code, and feel free to report issues and open PRs. Also, join us on the <a href="https://toolsforhumans.slack.com">Tools for Humans Slack</a> if you want to chat about it.</p> Mon, 29 Oct 2018 00:00:00 +0000 http://fewbar.com/2018/10/yaml-shyaml/ http://fewbar.com/2018/10/yaml-shyaml/ rust yaml kubernetes Rust The Build is Never Broken <p>When I joined the OpenStack Engineering team at GoDaddy, I want to make it clear: things were not completely terrible. In fact, there were some really amazing things happening.</p> <p>Most of the things they needed to do over and over were in code, and 99% of that was in a single chain of git repositories that the entire team was aware of and participating in. There was CI in various forms running against PRs opened on repos and building things after stuff landed. In general, most obvious mistakes didn’t waste the team’s time.</p> <p>However, there was one really frustrating thing that nobody seemed to be able to really solve: there was no realistic dynamic development environment. There were so many repos and so many components that it just wasn’t feasible to scale it down entirely.
If you had an idea of how something needed to get done, you had a few choices, none of which were fast or fun:</p> <p>1) You could write a patch that looked kind of right, propose it against the right git repo, get it through review and landed, wait for the artifact builds to produce the needed bits, deploy them to the “dev” environment, and then iterate on that until dev worked the way you wanted, promoting the artifacts to stage and eventually production.</p> <p>2) You could try to edit the <code class="highlighter-rouge">dev</code> environment directly, reverse-engineer what you did back into a patch, and then do step (1) but with at least a bit more confidence.</p> <p>3) Cry into your pillow. Go to step 1.</p> <p>Now, you may be asking “why not Vagrant?”, and I’d ask that too, except I have almost never had any success with Vagrant or similar. Because it’s so different from production deployments, the Vagrant build is almost always broken a little bit in some way. Also, because there are so many of these type (1) changes in flight, <code class="highlighter-rouge">master</code> is often a complete shambles, and you end up needing to either rewind to the last known-good deploy, or pull somebody else’s branch that has a fix in flight. It’s entirely possible to have a great local-dev or even cloud-based Vagrant or Vagrant-like tool. But the effort to build and maintain it is pretty large, and if the only benefit you get is local dev, you have just traded one velocity problem for others, such as change delivery success rate and transparency.</p> <p>Does this sound familiar? Having joined a few DevOps-ish teams over the last few years, I can say it’s a common pattern. This isn’t a tools problem. The team was sort of 95% Puppet and 5% Ansible, and that 5% was no better or worse in this respect.
No, we didn’t have to wait for RPMs to be built for Ansible, but because we could just iterate on our laptops, many in-flight in-dev changes were actually more dangerous than Puppet changes, because they would often be run on production and then just forgotten, never reincorporated into the git repos.</p> <p>That’s not because the team didn’t want to be better tested, nor was it because they lacked the capability. They had some pretty heavy-hitting Jenkins talent at their disposal who could get anything done that Jenkins was capable of. And it’s worth noting that there was an attempt to build a “dev on demand” set of Ansible playbooks to get this done. However, as noted above, this might not actually have resulted in a net win.</p> <p>The problem that many teams face is the same one that my co-workers faced then: the build was often broken and incomplete, whether that was detected or not, because the tools we have for testing are not flexible where they need to be, and as a result, they’re not able to be strong when they need to be.</p> <p>Solving this problem is the crux of Zuul’s very existence. Adopting Zuul means any team, whether you’re 3 people working on a single repo, or 300 people working across 25 repos, can reap the benefits of an always-deliverable set of repositories and branches.</p> <p>So how does Zuul do this, and why can’t other systems get this done? Well, the answer starts with git.</p> <p>If you’re not familiar with git, um… it’s 2018, please spend the next hour learning what it is, and then 5 minutes asking why on earth your build engineers haven’t switched.</p> <p>Most tools try very hard to be somewhat git-stupid. Jenkins comes to mind here. While it has plugins that let it listen for <code class="highlighter-rouge">GitHub</code> webhooks and events from other change management systems, it really doesn’t want to be too aware of git.
Git is relegated to the same role as <code class="highlighter-rouge">curl</code>: it’s the way you fetch the code to do things with.</p> <p>But that’s really not all git is. This single-mindedness around git is often where tools sort of give up. Implementers of CI tools seem reluctant to think of what happens after a git commit lands.</p> <p>But Zuul came from a place where there were very large penalties for landing broken code. OpenStack had an extremely wide scope, and as such, many developers were showing up with code and integrations between the various projects. Having “the build broken” for 2,000 people shines a very bright light on just how important it is that tests work, and that code does what reviewers and coders believe it does.</p> <p>Because, you see, when you land things in git, they are thus part of a timeline that did not exist when others pulled code. And when those others land things in their local git repository, and make it work with their local dev tools, there’s nothing to say that what’s in master will keep working when integrated with those unfinished changes. Add in dependent repos and changes, and the certainty of something that worked today continuing to work if landed tomorrow is very low. It’s all a big, messy, eventually consistent, distributed system.</p> <p>But it doesn’t have to be broken all the time, and we don’t always have to rebase on master every time it changes “just in case”. If we think of a set of git repositories and/or branches as a unit to be tested together, it’s a very short bridge to fully testing things together in the form that they will be made available to others, before they are made available to others. Just like you don’t send things to production without first testing them in stage exactly as they are, you shouldn’t send code to your developers without having first had it tested exactly as it will land.</p> <p>And once you decide on that being a good thing to do, other items pop into your head.
What about multi-node testing of distributed systems? How can I make it go faster? These are questions that the creators of Zuul faced as well, and solved with simple, straightforward answers that have been proven valuable through the years of OpenStack running its entire development infrastructure on them.</p> <p>So, at GoDaddy, when it came time to reboot our OpenStack installations a bit, I couldn’t think of a better way to leverage Zuul’s power than to start with a 5-VM job to deploy a mini-cloud onto, and run some tests against it.</p> <p>I’ll be honest, I gave this initiative a 50% chance of working. The deadlines were tight, and the political environment around the products being supported was a bit stressful. I fully believed that somebody would look at what we were doing and pull the plug in favor of some other more widely accepted tool or more established pattern.</p> <p>Luckily, nobody had time to say no to Zuul. So we built this 5-VM job up to the point where it deployed an OpenStack control plane the way we wanted to deploy in production. Our first few deploys to the POC environment found all the ways that a mini-cloud is different from a real one, but we had an enormous amount of momentum built up behind Zuul and this job, and Zuul mostly seemed to be getting out of people’s way at the right time. We had our “dev on demand”, and even better, we had a single, transparent way to land changes.</p> <p>Since then we’ve been pushing changes at a pretty impressive clip, and it is quite rare for a change to break production. We do get quite a bit of coverage for our automation from the various Zuul jobs that run before changes land.
What still causes us regular trouble are mostly the corner cases: scale problems, mistyped config details, or scheduling issues where the hardware actually matters.</p> <p>But we have devised a scheme to leverage Zuul for these issues as well, by using it to kick off deploys to our staging environment and promote changes from there to production only after automated tests have run.</p> <p>So how does Zuul do this, and why does it leverage Ansible?</p> <p>First and foremost, because Zuul is fully git-aware, an engineer is effectively able to build a new future in a set of git repositories and branches. If this new future must span multiple repos, Zuul offers the engineer the ability to specify dependencies on other changes.</p> <p>So, if you need to propose a new variable for a role that is shared amongst automation concerns, and then depend on that variable for yours, you can add <code class="highlighter-rouge">Depends-On: https://your.github/roles_org/role_repo/pulls/1234</code> to the commit message of your change. While Zuul is building working directories to run jobs in, it will see this dependency, and use the branch/PR/etc. that you have submitted as its basis for pre-merge testing. And when all reviewers are happy with your dependent change, it won’t land until the upstream dependency lands too.</p> <p>So now you can build an entire future without having landed risky code in master, and without having to wait for your upstream dependencies. This even works if the upstream project doesn’t use Zuul. As long as you can give Zuul credentials to inspect the change management system and pull the necessary commits, it will be able to build a speculative future from them, and it won’t accidentally land your dependent change until the upstream change is merged.</p> <p>This even works across Gerrit, GitHub, and GitHub Enterprise. Other systems are also in development.
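The convention itself is easy to recognize mechanically: Zuul scans commit messages for these footers. As a rough illustration, here is a Python sketch of that kind of footer scan — it is not Zuul’s actual parser, just a sketch of the convention:

```python
import re

# Match "Depends-On: <url>" footer lines anywhere in a commit message.
# Illustrative only; Zuul's real parsing is more involved.
DEPENDS_ON = re.compile(r"^Depends-On:\s*(\S+)\s*$", re.IGNORECASE | re.MULTILINE)

def find_dependencies(commit_message):
    """Return every change URL this commit declares a dependency on."""
    return DEPENDS_ON.findall(commit_message)

message = """Use the new shared timeout variable

Depends-On: https://your.github/roles_org/role_repo/pulls/1234
"""
print(find_dependencies(message))
# prints ['https://your.github/roles_org/role_repo/pulls/1234']
```

Each URL found this way names a change whose proposed state gets pulled into the speculative working tree before jobs run.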
This is nice if, say, you have GHE internally and depend on upstream GitHub projects: you can encourage your engineers to submit code upstream. And if you really want to get involved deeply with that upstream, you can even set up your Zuul as a GitHub app, and report statuses on those repositories, which should help them avoid merging code that breaks you.</p> <p>Now that you’ve embraced Zuul’s future-building capabilities, you’ll want to start expanding coverage. Zuul has all of what you’d probably expect for doing the easy stuff: linters, unit tests, etc. But what about full integration tests like the “build a mini-cloud” job we talked about before?</p> <p>For this, Zuul is going to need a cloud. Current releases only support OpenStack clouds, or static pools of SSH- or PowerShell-accessible machines. However, there are several public cloud drivers, such as AWS and GCE, in various stages of maturity. The AWS driver in particular is close to being ready.</p> <p>Once you have given Zuul cloud resources via its sub-component named <code class="highlighter-rouge">nodepool</code>, you can start attaching nodesets to your jobs. These can be backed by various flavors, images, and configuration details, and given names and Ansible groups for use in playbooks. Typically you’ll even have a default nodeset in your base job that everything parents to.</p> <p>At GoDaddy we have allocated 75 “large” instances (8GB RAM, 120GB of disk) on one of our less busy private OpenStack clouds for running tests. We have also defined a custom image using <code class="highlighter-rouge">nodepool-builder</code>, which gets rebuilt every 12 hours to pull in the latest apt and pip packages that our jobs will need. That way our job runtimes don’t get too long from downloading and extracting new copies of things.
We also ask <code class="highlighter-rouge">nodepool</code> to keep 15 nodes running at all times, so that any 5-node job will be able to start <em>immediately</em> and not have to wait for the cloud to spin up VMs. This also tends to smooth out problems with cloud control planes, which we do experience from time to time.</p> <p>Alright, so now we have compute resources, optimized images, and git repos plugged into Zuul. What’s next? We need to define jobs.</p> <p>At GoDaddy we have some repos that have just one <code class="highlighter-rouge">noop</code> job on them. This still has a benefit, as the repo may be housing Zuul configuration that needs to be validated before changes are landed. We also have our busiest repo, which we call <code class="highlighter-rouge">openstack-deploy</code>, which runs between 3 and 9 jobs on every PR, and 3-4 jobs in the gate. The number varies because sometimes we use Zuul’s ability to skip jobs by filename. So, for instance, we don’t need to run the big, long <code class="highlighter-rouge">kolla-ansible</code> job that deploys a mini-cloud if the change wholly consists of configuration details for our production clouds.</p> <p>One really interesting aspect of working with Zuul is when you start to have interdependencies in repos. We have a repo that houses patches we apply to the upstream OpenStack deployment tool named <code class="highlighter-rouge">kolla-ansible</code>. Whenever we run any deployments of <code class="highlighter-rouge">kolla-ansible</code>, we apply these patches on top, and generate the configuration for <code class="highlighter-rouge">kolla-ansible</code> on top of that. This means that if we update the patch repo with something that won’t deploy right, we could end up with a broken master again.</p> <p>But luckily, Zuul was built with this scenario in mind, and as such, allows us to have two repos run the same job in a shared queue.
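The behavior of a shared gate queue can be modeled in a few lines: each enqueued change is tested in combination with everything that landed ahead of it, across all participating repos, and only merges if that combination passes. A toy Python model of the idea follows — this is not Zuul’s implementation, and the repo and change names are invented:

```python
def gate(queue, passes_tests):
    """Land each change only if it passes combined with what landed ahead."""
    merged = []
    for change in queue:
        candidate = merged + [change]
        if passes_tests(candidate):
            merged = candidate
        # A failing change is evicted; in Zuul, changes queued behind it
        # are automatically retested without it.
    return merged

queue = [
    {"repo": "openstack-patches", "id": "fix-deploy-patch"},
    {"repo": "openstack-deploy", "id": "bump-config"},
]
# Pretend the deploy job only passes once the patch fix is present:
needs_fix = lambda changes: any(c["id"] == "fix-deploy-patch" for c in changes)
print([c["id"] for c in gate(queue, needs_fix)])
# prints ['fix-deploy-patch', 'bump-config']
```

The point of the model is the ordering guarantee: nothing merges except as part of a combination that was actually tested.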
That means that if I propose a patch to <code class="highlighter-rouge">openstack-patches</code>, and an update to <code class="highlighter-rouge">openstack-deploy</code> that seems unrelated, they’ll be tested together, one landing before the next, using Zuul’s speculative execution capabilities. If we do this right, it means we can’t land broken code in either repo.</p> <p>Finally, for those times when you just can’t figure out why a job is failing, there’s the “auto hold”. Zuul can hold on to test nodes after a job fails if you ask it to ahead of time. This allows an engineer to log in to the test nodes and poke around, even to try running the test again with modified code. Many of our biggest refactors happened on held test nodes, where an engineer would fiddle with things on those VMs, and then pull the changes back down and submit a fixed patch over the course of a few days.</p> <p>So, from a cultural perspective, how did having these new capabilities affect team productivity?</p> <p>First and foremost, there’s a good chance that, without Zuul, we would have crumbled under the pressure of a very tight deadline. With many, many changes in flight and moving rapidly, it would have been an absolute momentum killer to have to stop and fix broken builds every day. Furthermore, by being able to let Zuul spin up mini-clouds, we gave our developers a ‘fire and forget’ mechanism for testing their changes in parallel, isolated from each other while the changes were still in flux.</p> <p>Second, we found that members of the team were able to find information about how things work faster, because everything, including the configuration for the actual tests, is stored in git trees. It really does help when you’re doing a <code class="highlighter-rouge">git annotate</code> on a file, and the change for a line that is confusing you is accompanied by edits to the testing configuration.
This is especially helpful in root-cause analysis, where you are trying to match timelines from multiple sources together to find when a change may have been made that resulted in an incident.</p> <p>Finally, Zuul was actually only half of the story. Another big part of it was that because Zuul was just running Ansible, we were able to leverage that for other tasks. We don’t actually run our Ansible against production using Zuul. Instead, we run the same playbook that Zuul does with our super duper chat bot named Padre. I very much hope that we can open source Padre soon, and present it at a future AnsibleFest. But ultimately, Zuul was relatively straightforward to adopt because we could use the tool we knew already: Ansible. It was also easy to break back out of Zuul when we needed to, for the same reason.</p> <p>So what should you do if you are interested?</p> <ul> <li> <p>Attend Ricardo Carrillo Cruz’s deep dive into Zuul and Ansible Networking at 3:00pm.</p> </li> <li> <p>Come talk to us at the Zuul booth!</p> </li> <li> <p>Deploy Zuul – I’m not going to lie, this isn’t easy, and it doesn’t make a ton of sense unless you have your code in either GitHub, GitHub Enterprise, or Gerrit.</p> </li> </ul> Tue, 02 Oct 2018 00:00:00 +0000 http://fewbar.com/2018/10/the-build-is-never-broken/ http://fewbar.com/2018/10/the-build-is-never-broken/ zuul cicd automation development vcs git Technology Trump gonna Trump - Time to move forward <p>I usually don’t post much about politics here, but this week our President was <a href="https://www.washingtonpost.com/politics/a-time-magazine-with-trump-on-the-cover-hangs-in-his-golf-clubs-its-fake/2017/06/27/0adf96de-5850-11e7-ba90-f5875b7d1876_story.html">caught looking pretty silly</a>, and I feel like it’s worth commenting on.</p> <p><img src="/images/trumpcover.jpg" alt="Fake Time Cover" /></p> <h6 id="source-washington-post">Source: Washington Post</h6> <p>Just do the thought experiment: Imagine if Hillary Clinton or Barack
Obama or John Kasich made something like this and put it up anywhere. Those who thought highly of them would think it was a joke. Any of them would likely come out immediately and explain that it was in fact a joke, and most of us would have a laugh at it. It’s so out of character, it would just be laughable to anyone who looks at those individuals as good people.</p> <p>But no matter what they did, of course, those who already thought negatively of them would call it narcissism of the highest form and decry them as purveyors of fake news. This would blow over pretty quickly, because none of these people have been pointing fingers at their critics and calling them fake news. But it would certainly serve to enrage their detractors.</p> <p>The reason this doesn’t bother the <a href="http://time.com/4523972/donald-trumps-comment-root-sexual-violence/">45th President of the United States</a>’s supporters is that <em>they already accepted that he has no integrity</em>. They see this and just go “This guy. What will he do next?” But if you’re holding his feet to the fire, it just tickles that healthy confirmation bias that he is the end of our democratic traditions and a truly terrifying individual who is burning down the reputation of the presidency one <a href="https://www.nytimes.com/2017/05/31/us/politics/covfefe-trump-twitter.html">stupid tweet</a> at a time.</p> <p>Usually we have a fringe of people who treat the president this way. Either it’s the far left protesting Bush 43 over Iraq and spending cuts, or the far right protesting Obama over immigration and climate change policies. Grow a thick skin, Mr. President, whoever you are.</p> <p>But now we have <a href="https://projects.fivethirtyeight.com/trump-approval-ratings/">a large majority of moderates joining the far left in opposing this president</a>. That only serves to entrench his supporters more. They see themselves as outsiders who finally got their guy in.
So don’t be surprised when his supporters laugh off these stunts. They don’t care, because they just want you to acknowledge that they won and you lost, and that he cares about them and their issues. They don’t mind that he’s embarrassing, as long as he’s sticking it to you: the elites and their supporters who have been sticking it to them for years.</p> <p>IMO, just ignore all of that. We’ve said our piece about this man, and now it’s time to get down to fixing the damage and building a better society that is more resilient to these problems.</p> <p>How? We start by taking time to support strong leaders who will restore the respect and dignity of the office and the nation. <a href="https://mayday.us/">Support legislators who will amend the Constitution</a> to reverse the influence corporations have over elections, so we aren’t stuck with fundraisers and reality TV stars as our only choices.</p> <p>And finally, when our friends, family, and neighbors who supported Trump come back in 2020 like a screaming horde of Vikings intent on burning down the last vestiges of civil society: don’t hate them. Don’t belittle them. Don’t even shout at them. Just be ready with the only real defense any of us have: our vote.</p> Sat, 01 Jul 2017 00:00:00 +0000 http://fewbar.com/2017/07/trump-gonna-trump/ http://fewbar.com/2017/07/trump-gonna-trump/ politics Life Free and Open Source Leaders -- You need a President <p>Recently I was lucky enough to be invited to attend the <a href="http://events.linuxfoundation.org/events/open-source-leadership-summit">Linux Foundation Open Source Leadership Summit</a>. The event was stacked with many of the people I consider mentors, friends, and definitely leaders in the various Open Source and Free Software communities that I participate in.</p> <p>I was able to observe the <a href="https://www.cncf.io/">CNCF</a> Technical Oversight Committee meeting while there, and was impressed at the way they worked toward consensus where possible.
It reminded me of the <a href="https://www.openstack.org/foundation/tech-committee/">OpenStack Technical Committee</a> in its makeup of well-spoken technical individuals who care about their users and stand up for the technical excellence of their foundations’ activities.</p> <p>But it struck me (and several other attendees) that this consensus building has limitations. <a href="https://twitter.com/adamhjk">Adam Jacob</a> noted that Linus Torvalds had given an interview on stage earlier in the day in which he said that most of his role was to listen closely for a time to differing opinions, but then stop them when it was clear there was no consensus, and select one that he felt was technically excellent, and move on. Linus, being the founder of Linux and the benevolent dictator of the project for its lifetime thus far, has earned this moral authority.</p> <p>However, unlike Linux, many of the modern foundation-fostered projects lack an executive branch. The structure we see for governance is centered around ensuring that corporations that want to sponsor and rely on development have influence. Foundation members pay dues to get various levels of board seats or corporate access to events and data. And this is a good thing, as it keeps people like me paid to work in these communities.</p> <p>However, I believe that as technical contributors, we sometimes give this too much sway in the actual governance of the community and the projects. These foundation boards know that day-to-day decision making should be left to those working in the project, and as such allow committees like the <a href="https://www.cncf.io/">CNCF</a> TOC or the <a href="https://www.openstack.org/foundation/tech-committee/">OpenStack TC</a> full agency over the technical aspects of the member projects.</p> <p>I believe these committees operate as a legislative branch. They evaluate conditions and regulate the projects accordingly, allocating budgets for infrastructure and passing edicts to avoid chaos.
Since they’re not as large as political legislative bodies like the US House of Representatives &amp; Senate, they can usually operate on a consensus basis, and not drive everything to a contentious vote. By and large, these are as nimble as a legislative body can be.</p> <p>However, I believe we need an executive to be effective. At some point, we need a single person to listen to the facts, entertain theories, and then decide and execute a plan. Some projects have natural single leaders like this. Most, however, do not.</p> <p>I believe we as engineers aren’t generally good at being like Linus. If you’ve spent any time in the corporate world, you’ve had an executive disagree with you and run you right over. When we get the chance to distribute power evenly, we do it.</p> <p>But I think that’s a mistake. I think we should strive to have executives. Not just organizers like the <a href="https://docs.openstack.org/project-team-guide/ptl.html">OpenStack PTL</a>, but more like the <a href="https://www.debian.org/devel/leader">Debian Project Leader</a>: empowered people with the responsibility to serve as visionaries and keep the project’s decision making relevant and of high quality. This would also give the board somebody to interact with directly, so that they do not have to try to convince the whole community to move in a particular direction to wield influence.
In this way, I believe we’d end up with a system of checks and balances similar to the US Constitution.</p> <p><img src="/images/usgovt.jpg" alt="Checks and Balances" /></p> <p>So here is my suggestion for how a project executive structure could work, assuming there is already a strong technical committee and a well-defined voting electorate that I call the “active technical contributors”.</p> <ol> <li> <p>The president is elected by <a href="https://en.wikipedia.org/wiki/Condorcet_method">Condorcet</a> vote of the active technical contributors of a project for a term of 1 year.</p> </li> <li> <p>The president will have veto power over any proposed change to the project’s technical assets.</p> </li> <li> <p>The technical committee may override the president’s veto by a supermajority vote.</p> </li> <li> <p>The president will inform the technical contributors of their plans for the project every 6 months.</p> </li> </ol> <p>This system only works if the project contributors expect their project president to actively drive the vision of the project. Basically, the culture has to turn to this executive for final decision making before it comes to a veto. The veto is for times when the community makes poor decisions. And this doesn’t replace leaders of individual teams. Think of these like the governors of states in the US. They’re running their sub-project inside the parameters set down by the technical committee and the president.</p> <p>And in the case of foundations or communities with boards, I believe ultimately a board would serve as the judicial branch, checking the legality of changes made against the by-laws of the group. If there’s no board of sorts, a judiciary could be appointed and confirmed, similar to the US Supreme Court or the <a href="https://www.debian.org/devel/tech-ctte">Debian CTTE</a>.
This would also be necessary to ensure that the technical arm of a project doesn’t get the foundation into legal trouble of any kind, a role that foundation boards already tend to play.</p> <p>I’d love to hear your thoughts on this on Twitter; please tweet me <a href="https://twitter.com/spamaps">@SpamapS</a> with the hashtag #OpenSourcePresident to get the discussion going.</p> Sat, 18 Feb 2017 00:00:00 +0000 http://fewbar.com/2017/02/open-source-governance-needs-presidents/ http://fewbar.com/2017/02/open-source-governance-needs-presidents/ opensource governance openstack cncf Technology Open Source OpenStack CNCF Rust - You Complete Me (And then drop me, because I'm out of scope) <p>To My Dearest <a href="http://www.rust-lang.org/">Rust</a>,</p> <p>Ever since I laid eyes on your braces and semicolons, I knew there was something special about you. <a href="https://github.com/SpamapS/rustygear">This past winter holiday</a> that we spent together has changed my life. I’ll never be the same. The way you embrace life by being explicit about the death of objects, the way you force me to be clear when I’m borrowing your things. Sure, it was a bumpy beginning. I thought maybe I might run back to safe, warm Python’s arms. But you didn’t give up on me; you kept warning me that I was making everything mutable when I didn’t have to. And now, whatever happens, I’m a better man for having known you. <img src="/images/InLove.gif" alt="Rust, how do I love you, let me count the ways" /></p> <p>Some might say being explicit about the length of our lifetimes is macabre, but I find it invigorating. It’s a reminder that some things will outlive others, and being able to see that, and know the day some of our objects will die, is a reminder that most of our data is related, and sometimes we need to spell out how, up front, to prevent garbage building up, which would force us to pause and deal with it later.</p> <p>And you saved me from modifying my variables in loops.
I never even knew how many times I made that mistake and had to double back to fix those errors. I always thought I was being cool, reusing variables, but you called me out and made sure I never did that after I gave them to someone else. This made me frugal with my CPU and memory by helping me think about when and where exactly I’d spend them. Explicit mutability? How about explicit <em>cuteability</em>.</p> <p>And just the other day, when I asked you if we could go multi-threaded together, you didn’t just go along easily. You didn’t just hand me the keys and make me drive the whole process. You challenged me to use mutexes and reference-counted pointers. You held my hand while I fumbled through it, and offered encouraging tips, with a lot of reminders to wrap things in safer containers before we went out into the cold, brutal multi-threaded world. Because of you, I’ll never have to feel the cold sting of corrupted memory again.</p> <p>My love, Rust, I don’t know if we can be together. You’re so new to this world and I’m not sure everyone will understand you. But I know I’ll do whatever I can to tell the world about your beauty and grace.</p> <p>Love Always, - Clint</p> <p><em>p.s. let’s meet up again around spring break.</em></p> Mon, 23 Jan 2017 00:00:00 +0000 http://fewbar.com/2017/01/a-love-letter-to-rust/ http://fewbar.com/2017/01/a-love-letter-to-rust/ rust programming Rust OpenStack's nova-compute's border is porous - We need to build a wall <p>In the beginning there was Nova. It included volumes, networking, hypervisors, and scheduling. Since then, Nova components have either been replaced (nova-network with Neutron) or forklifted out and enhanced (Cinder). In so doing, interfaces were defined for how Nova would continue to make use of these now-external services, but nova-compute, the place where the proverbial rubber meets the road, was left inside Nova.
This meant that agents for Cinder and Neutron had to interact with nova-compute through the high-level message bus, despite being right on the same physical machine in many (but not all) cases. Likewise, some configurations take advantage of that co-location, and require operator cooperation to set up certain drivers.</p> <p>This has led to implementation details leaking all over the APIs that these services use to interact. Neutron and Nova do a sort of haphazard dance to plug ports in, and Cinder has drivers that require locking files on the local filesystem a certain way. These implementation details are leaking into public APIs because it turns out nova-compute is actually a shared service that should not belong to any of the three services, and which should define a clearer API that Nova, Cinder, and Neutron can use to access the physical resources of machines on an equal footing.</p> <p><a href="https://review.openstack.org/#/c/411527/">We’re starting a discussion in the OpenStack Architecture Working Group</a> around whether this is creating real problems, and how we can address it.</p> <p>What I think we need to do is build a wall around nova-compute, so we can accurately define what goes in or out, and what belongs specifically in nova-compute’s code base. That way we can accept the things that should live and work permanently inside its borders vs. what should come in through an API port of entry and declare its intentions there.</p> <p>But before we can build that wall, we need nova-compute to declare its independence from Nova. That may be as much a social challenge as a technical one.
However, I think once we complete some analysis, and provide a path toward a more sustainable compute service, we’ll end up with a more efficient, less error-prone, more optimizable OpenStack.</p> <p>If you’re interested in this, I recommend you come to the next IRC meeting for the <a href="https://wiki.openstack.org/wiki/Meetings/Arch-WG">Architecture WG</a>, on January 12, 2017.</p> Fri, 16 Dec 2016 00:00:00 +0000 http://fewbar.com/2016/12/mr-nova-build-that-wall/ http://fewbar.com/2016/12/mr-nova-build-that-wall/ openstack architecture nova neutron cinder microservices OpenStack The real newcomers taking your job, since 1961, and still going <p>The recent Presidential and Congressional elections in the US shocked me to the core. I, like many of my closest friends, was certain that the American people would reject Donald Trump and the Republican Party’s rhetoric.</p> <p>But the election happened, and since then, I’ve been trying to pay attention to the reasons. I’ve had many conversations with Trump voters and the old adage proves true: It’s the economy, stupid.</p> <p>But what’s wrong with the economy? For me, a tech worker in California, the last 8 years have been the best of my life. My pay has risen, and my job quality has gone up. This is true of all of my close associates as well. We simply haven’t seen this economy as anything but a boon. Of course, we’ve worked hard, and played our cards right. But the timing has never been better for workers in the tech sector.</p> <p>However, I’m not ignorant of the reasons behind this. Why is my salary going up while the salaries of factory workers in Ohio and Michigan go down?</p> <p>Donald Trump would have you believe it is our trade policies and the lack of a large wall on our southern border. The latter is an absolutely absurd idea on its face, but if you think longer, it’s really just a physical manifestation of the frustration of his supporters.
They do see Mexican and Central American immigrants working, and they think “They took some US citizen’s job.”</p> <p><a href="http://www.nytimes.com/2016/09/22/us/immigrants-arent-taking-americans-jobs-new-study-finds.html?_r=0">Economists disagree</a>. In fact, those immigrants who have illegally crossed the border tend to take service-economy jobs that are low-paying and without benefits. Because they live in fear of deportation, they tend not to exercise their labor rights, and as a result, tend to have a very low job quality. That’s not the kind of job that will “make America great again”. That’s the kind of job that comes and goes over time and leads mostly to a lower-class lifestyle. Those who come legally tend to come on visas to fill labor shortages, despite rhetoric suggesting that somehow companies are abusing the H-1B and other programs.</p> <p>But what about trade policies? Is it simply too easy to make stuff in China, Mexico, or Pakistan, and then import it back to the US?</p> <p>That is a part of it. Those places don’t have the same worker protections and have a lower cost of living, so one would expect that greedy corporations can make more money by reducing manufacturing costs there, and giving back a bit of the margin in shipping costs.</p> <p>But many things made in factories require customization. One difficulty in putting the product so far from the consumer is that you can only make to stock. Make to order with a 10-week lead time is extremely haphazard and unpopular with most products. Many of the products still made in the US are of this kind.</p> <p>Also, many products require skilled labor to produce. While a T-shirt can be sewn by relatively unskilled hands, and an iPhone can be assembled in stages that require minimal training, a wafer of microprocessors must be created in a high-grade clean room by automation that is overseen by well-trained employees.
Certain products are simply so American that it would make no sense to make them anywhere else, such as Wilson footballs, which will likely forever be made in the US unless China decides it wants more concussions and we end up with a Shanghai vs. Dallas Super Bowl in 2035.</p> <p>Also, don’t forget that these countries have now built their own middle class, and will soon run out of cheap labor as well. There are more emerging economies, but the point is, this isn’t a never-ending chain, though it is one that doesn’t end soon.</p> <p>So I would suggest that while globalization is an important factor, it’s been here for a long time, and no recent trade policies have really added to its impact. Those jobs aren’t coming back because our government wills them to. Tariffs on Chinese imports will just result in China putting tariffs on US goods, and soon you’ll find that companies in the US are struggling to grow because the US economy, while large, is not in fact big enough to sustain itself. Whether you agree with the way in which NAFTA or the TPP were implemented, the economy will experience a huge upheaval without international free trade of some kind.</p> <p>So, globalization took jobs away decades ago. What’s going on now? Why haven’t manufacturing jobs grown with the rest of the economy?</p> <p>Well, I’m sorry to say, but in many cases, I took your job. Not me personally, but my industry has made automation and artificial intelligence a reality. And if you are being relied upon to make things even after globalization, get ready to have your job threatened again. <a href="http://www.historyofinformation.com/expanded.php?id=4071">In 1961 the first industrial robot, Unimate</a>, took dangerous jobs away from GM factory workers, and since then plenty more robots have been added to the global manufacturing scene.
This was an expensive robot to build and operate, and so, by the 1980s, we had already seen that generation of robots take as many jobs as were going to be taken.</p> <p>But lo, a new generation is upon us. <a href="http://science.howstuffworks.com/baxter-robot3.htm">Robots are on the market right now that cost under $30,000</a>, and will do general-purpose tasks with enough flexibility to make things to order. This means that for a capital investment of a low-end employee’s salary for a year, a factory can replace a human right here in the US. No more benefits, smaller parking lot, no <em>air conditioning</em> or <em>heating</em>, no cafeteria. And they’ll just need to employ a couple of engineers to keep the whole thing running.</p> <p>So what do we do for those displaced workers as automation happens at this level?</p> <p>Well, believe it or not, there are <em>tons</em> of jobs that aren’t getting done because of labor <em>shortages</em>. These jobs aren’t just in computers. They are also civil engineering tasks, environmental engineering, and raw science. These are all fields that require specific training, whether it’s a doctoral degree or focused instruction in a particular discipline.</p> <p>But, you don’t have a college degree, you weren’t trained in one of these fields, and your job is threatened, so you’re not going to be able to afford to get one.</p> <p>Well, folks, this is where the recent choice of a Republican-majority government is going to make this hard. The Republicans are suggesting that if they let those at the top keep more of their money, they’ll build more factories, and invest in more businesses. But the reality is, that will just enable them to buy more automated infrastructure, and keep even more of their profits.
They’ll do this 100% under the protection of the US Constitution, and there won’t be anything you can do about it.</p> <p>I know, it sounds like Marxism to some, but the answer is to <em>raise</em> taxes on those individuals living far above subsistence and even above comfortable middle-class lives. We should then use that money to make college and advanced job training affordable, or even free to those who qualify by their academic achievements. That will get us even more engineers and scientists to actually build the world we want to live in, and also more system administrators, repair technicians, etc. to keep the world running. Most of these are safe jobs, and many of them can be done remotely, so you don’t have to move to a dirty, crowded city to take them.</p> <p>And make no mistake, what I’m arguing for is my own salary to be reduced. If there are more people out there who can do my job, I can expect to make less money. But I’d be happy to have less money if it meant my kids get to live in a world where everybody has a chance to do what they want with their time, and we have the time to take care of the earth the way it should be done.</p> Mon, 21 Nov 2016 00:00:00 +0000 http://fewbar.com/2016/11/automation-will-change-your-job/ http://fewbar.com/2016/11/automation-will-change-your-job/ automation labor future america Life OpenStack needs an Architecture WG - Because we all can't be Gaudi <p><a href="https://en.wikipedia.org/wiki/Antoni_Gaud%C3%AD">Antoni Gaudí</a> designed one of the most beautiful things I have ever seen in my life, the <a href="https://en.wikipedia.org/wiki/Casa_Batll%C3%B3">Casa Batlló</a> in Barcelona. <img src="/images/Casa-Batllo-Barcelona.jpg" alt="Casa Batlló" /></p> <p>While touring the site, the audio tour guide explained multiple times that this entire site, from the basement to the roof, had no detailed written plans.
Gaudí had to supervise every aspect of the construction, so that it was exactly the perfect masterpiece it is today. Gaudí is truly a legendary human being, and one of the greatest architectural minds in history. This means that he produced masterpieces, but it also means they can never be duplicated, are nearly impossible to improve, and are even quite difficult to maintain.</p> <p>And it is now, while I’m stuffed into “The giant metal tube” (Thanks <a href="https://twitter.com/robynbergeron">Robyn</a>), that I’m enjoying a rare moment of very clear thought after an <a href="https://www.openstack.org/summit/barcelona-2016/">OpenStack Summit</a> related to “God’s Architect”.</p> <p>The most important session that I attended, and led, was <a href="https://etherpad.openstack.org/p/BCN-architecture-wg">a fishbowl to discuss the Architecture Working Group</a>. Even with <a href="http://lists.openstack.org/pipermail/openstack-dev/2016-June/097657.html">mailing list threads</a>, <a href="https://review.openstack.org/#/c/335141/">review cycles</a>, <a href="https://etherpad.openstack.org/p/architecture-working-group">etherpads</a>, and <a href="https://wiki.openstack.org/wiki/Meetings/Arch-WG">meetings</a> behind us, it was clear from the discussion that there was some broad misunderstanding of what we were doing and why we want it to be a part of the community. We definitely used the time well, and I think we scraped away some of the boilerplate “we’re a team, here we are” boring stuff and dug down to what it is we want to do.</p> <p>In a nutshell, we’re here to look at the Nova, Neutron, Oslo, et al. masterpieces, and write down the plans that were never created before. While there are change specs and manuals for much of it, quite a bit has no binding theory of operation. As a result, there is quite a strong cargo cult inside OpenStack, leading to forward progress without understanding.
This creates an OpenStack where more people are writing code than can understand it, which complicates every aspect of developing and even operating it.</p> <p>We want to make sure that OpenStack can continue moving forward, and so we need to write down how things work now, record the current theory of operation, and then collaborate on improvements to those theories and the actual implementations.</p> <p>Without the Architecture Working Group, I’m sure OpenStack will remain valuable and even be maintainable. However, I’d like to see it continue to evolve and thrive even faster, and I’m proud to be working with people to try to provide a safe place to make that happen.</p> Sun, 30 Oct 2016 00:00:00 +0000 http://fewbar.com/2016/10/openstack-architecture-wg-because-we-all-arent-gaudi/ http://fewbar.com/2016/10/openstack-architecture-wg-because-we-all-arent-gaudi/ communication openstack summit barcelona architecture Life OpenStack Open Source People Communicate <p>As I sit here preparing to cross the Atlantic, I am pondering what we’ll do <a href="https://www.openstack.org/summit/barcelona-2016/">in Barcelona</a>.</p> <p>This will be my… (stopping to count on fingers… running out of fingers…) 11th Summit. Back in the Essex days, I was communicating about <a href="https://jujucharms.com/">Juju</a> whilst working for <a href="http://www.canonical.com">Canonical</a>. It was a fantastic experience to see some of the same communication methods we had used at <a href="http://uds.ubuntu.com/">Ubuntu Developer Summits</a>, and new ones, coming together to form this massive community.</p> <p>This will be the last summit where we ask <em>every</em> technical contributor to join the fray. An evolution of the process is under way, which has been called the <a href="http://www.openstack.org/ptg">Project Teams Gathering</a>.
So this may be the last time we do it the way it has always been done, with technical contributors mingling with business folks at the OpenStack Conference. There are some <a href="http://lists.openstack.org/pipermail/openstack-dev/2016-October/105524.html">concerns about this</a>, some even <a href="http://lists.openstack.org/pipermail/openstack-dev/2016-October/105260.html">expressed by me</a>. But I trust those who have been formulating this plan to be diligent at iterating on it to improve our throughput.</p> <p>And the reason I trust them is that I have seen one constant among successful Open Source contributors. We all communicate. I take it for granted how well we actually communicate, given how distributed we are, and how few shared objectives we have. But I think what separates a pet project on GitHub from an Ansible- or OpenStack-sized project is contributors who communicate early, and communicate often.</p> <p>So, I am very much looking forward to this upcoming summit. I expect that we will all do our best to communicate by listening, recording, reflecting, and adding our voices. But I am also quite excited to see how the new format works out, not only at the PTG in February, but also at the <a href="https://www.openstack.org/summit/boston-2017/">next Summit in Boston</a>.</p> Sat, 22 Oct 2016 00:00:00 +0000 http://fewbar.com/2016/10/opensource-people-communicate/ http://fewbar.com/2016/10/opensource-people-communicate/ communication openstack summit barcelona Life OpenStack Goodbye Wordpress+Drizzle+AWS, Hello GitHub Pages <p>For the past 6 years, my website has been running on WordPress, with <a href="http://drizzle.org">Drizzle</a> as the backend database. As far as I know, I’m the only person who was insane enough to attempt this.
It required patching <del>Drizzle</del>WordPress to support valid dates (MySQL allows 0000-00-00 in some cases; WordPress uses this heavily), and that patching has just gotten too difficult to keep up with.<sub><a href="#wordpress-drizzle-plugin">1</a></sub></p> <p>This also included running on a <em>t1.micro</em> at Amazon. This helped me learn how AWS works, and that was useful, but at this point, I think the $17/month or so that I’m paying for it isn’t really worth it, and I don’t log in enough to learn much anymore.</p> <p>So I’ve decided to make the jump to GitHub Pages. I like the idea of having a nice static page for my blog. Comments can happen on social media.</p> <p>Thanks, GitHub, for giving us coders a place to drop a static web page.</p> <p>And, so long, Drizzle, and thanks for all the fish.</p> <p><a name="wordpress-drizzle-plugin">Edit 1, October 10, 09:27 PDT</a>: <em>I also wrote a <a href="https://launchpad.net/wordpress-drizzle">WordPress Drizzle plugin</a> to handle the more straightforward elements of Drizzle’s stricter SQL dialect, but that never gave me any trouble once written. The issue with 0000-00-00 is that it is used all over WordPress as a string literal, and there’s no single place to change that.</em></p> Sun, 09 Oct 2016 00:00:00 +0000 http://fewbar.com/2016/10/goodbye-wordpress-drizzle-hello-github-pages/ http://fewbar.com/2016/10/goodbye-wordpress-drizzle-hello-github-pages/ jekyll wordpress drizzle aws ec2 github Life