I like cleaning git history, in feature branches, at least. The goal is a set of logical commits without other cruft, that can be cleanly merged into master. This can be easily achieved with git rebase and force pushing to the feature branch on GitHub.

Today I had a little accident and found myself in this situation:

  • I accidentally ran git push origin -f instead of my usual git push origin -f branchname or git push origin -f HEAD.
  • This meant that I not only overwrote the branch I wanted to update, but also accidentally a feature branch (called httpRefactor in this case) to which a colleague had been force pushing various improvements which I did not have on my computer. And my colleague is on the other side of the world, so I didn't want to wait until he woke up. (If you can talk to someone who has the commits, just have them re-force-push; that's quite a bit easier than this.) It looked something like so:
$ git push origin -f
<here was the force push that succeeded as desired>
 + 92a817d...065bf68 httpRefactor -> httpRefactor (forced update)

Oops! So I wanted to reset the branch on GitHub to what it should be, and while we're at it, also update the local copy on my computer. Note that the commit (or rather the abbreviated hash) on the left refers to the commit that was the latest version on GitHub, i.e. the one I did not have on my computer. A little strange if you're too accustomed to git diff and git log output showing hashes you have in your local repository.

Normally in a git repository, objects dangle around until git gc is run, which deletes any objects not reachable from a branch or tag. I figured the commit was probably still in the GitHub repo (either because it's dangling, or perhaps there's a reference to it that's not public, such as a remote branch); I just needed a way to attach a regular branch to it (either on GitHub, or by fetching it to my computer, attaching the branch there, and re-force-pushing). So step one was finding it on GitHub.

The first obstacle is that GitHub wouldn’t recognize this abbreviated hash anymore: going to https://github.com/raintank/metrictank/commit/92a817d resulted in a 404 commit not found.

Now, we use CircleCI, so I could find the full commit hash in the CI build log. Once I had it, I could see that https://github.com/raintank/metrictank/commit/92a817d2ba0b38d3f18b19457f5fe0a706c77370 showed it. An alternative way of opening a view of the dangling commit we need is the reflog syntax. Git reflog is a pretty sweet tool that often comes in handy when you've made a bit too much of a mess in your local repository, but it also works on GitHub: if you navigate to https://github.com/raintank/metrictank/tree/httpRefactor@{1} you will be presented with the commit the branch head was at before the last change, i.e. the missing commit, 92a817d in my case.
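The reflog mechanics behind the branch@{1} trick are easy to reproduce locally. A rough sketch (throwaway repo, the branch name reused for illustration) of why the pre-rewrite commit is still addressable:

```python
# Throwaway repo showing that branch@{1} (reflog syntax) still names
# the commit a branch pointed to before it was rewritten.
import subprocess, tempfile

def git(*args, cwd):
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

repo = tempfile.mkdtemp()
ident = ["-c", "user.email=me@example.com", "-c", "user.name=me"]
git("init", "-q", cwd=repo)
git(*ident, "commit", "-q", "--allow-empty", "-m", "base", cwd=repo)
git("checkout", "-q", "-b", "httpRefactor", cwd=repo)
git(*ident, "commit", "-q", "--allow-empty", "-m", "colleague's work", cwd=repo)
lost = git("rev-parse", "HEAD", cwd=repo)
# simulate the bad force push: rewrite the branch head
git(*ident, "commit", "-q", "--amend", "--allow-empty", "-m", "my version", cwd=repo)
# the previous position is still recorded in the reflog
recovered = git("rev-parse", "httpRefactor@{1}", cwd=repo)
```

The commit is gone from the branch but not from the object store, and the reflog entry keeps a name attached to it, which is exactly what the GitHub URL above exploits.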

Then follows the problem of re-attaching a branch to it. Running on my laptop git fetch --all doesn’t seem to fetch dangling objects, so I couldn’t bring the object in.

Then I tried to create a tag for the non-existent object. I figured the tag may not reference an object in my repo, but it would on GitHub, so if only I could create the tag, manually if needed (it seems to be just a file containing a commit hash), and push it, I should be good. So:

~/g/s/g/r/metrictank ❯❯❯ git tag recover 92a817d2ba0b38d3f18b19457f5fe0a706c77370
fatal: cannot update ref 'refs/tags/recover': trying to write ref 'refs/tags/recover' with nonexistent object 92a817d2ba0b38d3f18b19457f5fe0a706c77370
~/g/s/g/r/metrictank ❯❯❯ echo 92a817d2ba0b38d3f18b19457f5fe0a706c77370 > .git/refs/tags/recover
~/g/s/g/r/metrictank ❯❯❯ git push origin --tags
error: refs/tags/recover does not point to a valid object!
Everything up-to-date

So this approach won’t work. I can create the tag, but not push it, even though the object exists on the remote.
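For completeness, there is another avenue worth trying before resorting to the web UI, though it depends on the server's configuration: servers with uploadpack.allowAnySHA1InWant enabled will serve a fetch of a raw commit hash, which you can then attach a branch to locally. A sketch with a local bare repo standing in for GitHub (repo names are made up for the demo):

```python
# A bare "server" repo stands in for GitHub. With uploadpack.allowAnySHA1InWant
# enabled, a client can fetch a commit by its full hash even after the branch
# pointing at it was deleted, then attach a branch to FETCH_HEAD.
import os, subprocess, tempfile

def git(*args, cwd):
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout.strip()

base = tempfile.mkdtemp()
server = os.path.join(base, "server.git")
git("init", "-q", "--bare", server, cwd=base)
git("config", "uploadpack.allowAnySHA1InWant", "true", cwd=server)

work = os.path.join(base, "work")
os.mkdir(work)
git("init", "-q", cwd=work)
git("-c", "user.email=me@example.com", "-c", "user.name=me",
    "commit", "-q", "--allow-empty", "-m", "the missing commit", cwd=work)
sha = git("rev-parse", "HEAD", cwd=work)
git("push", "-q", server, "HEAD:refs/heads/tmp", cwd=work)
git("push", "-q", server, ":refs/heads/tmp", cwd=work)  # commit now dangling on the server

clone = os.path.join(base, "clone")
git("clone", "-q", server, clone, cwd=base)
git("fetch", "-q", "origin", sha, cwd=clone)   # fetch the raw hash
git("branch", "recover", "FETCH_HEAD", cwd=clone)
```

Whether a hosted service honors a fetch of a dangling hash is up to its operators, so treat this as something to try, not something guaranteed.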

So I was looking for a way to attach a tag or branch to the commit on GitHub, and then I found one. While having the view of the needed commit open, click the branch dropdown, which you typically use to switch the view to another branch or tag. If you type a word that does not match any existing branch, it will let you create a branch with that name. So I created recover.

From then on, it's easy: on my computer I went into httpRefactor, backed my version up as httpRefactor-old (so I could diff against my colleague's recent work), deleted httpRefactor, set it to the same commit that origin/recover points to, pushed it out again, and removed the recover branch on GitHub:

~/g/s/g/r/metrictank ❯❯❯ git fetch --all
(...)
~/g/s/g/r/metrictank ❯❯❯ git checkout httpRefactor
~/g/s/g/r/metrictank ❯❯❯ git checkout -b httpRefactor-old
Switched to a new branch 'httpRefactor-old'
~/g/s/g/r/metrictank ❯❯❯ git branch -D httpRefactor
Deleted branch httpRefactor (was 065bf68).
~/g/s/g/r/metrictank ❯❯❯ git checkout recover
HEAD is now at 92a817d... include response text in error message
~/g/s/g/r/metrictank ❯❯❯ git checkout -b httpRefactor
Switched to a new branch 'httpRefactor'
~/g/s/g/r/metrictank ❯❯❯ git push -f origin httpRefactor
Total 0 (delta 0), reused 0 (delta 0)
To github.com:raintank/metrictank.git
 + 065bf68...92a817d httpRefactor -> httpRefactor (forced update)
~/g/s/g/r/metrictank ❯❯❯ git push origin :recover
To github.com:raintank/metrictank.git
 - [deleted]         recover

And that was that… If you’re ever in this situation and you don’t have anyone who can do the force push again, this should help you out.

Original Article

How I came to work on SaltStack

I was working at Rackspace doing Linux support in the Hybrid segment. I did a lot of work with supporting Rackconnect v2/v3 and Rackspace Public cloud as well as the dedicated part of the house. I started down the road to server automation and orchestration the way I think a lot of people do. At some point, I started to think… there has to be a better way.

I began by learning chef. Rackspace's devops offering had just begun, and there were a lot of people using chef and pooh-poohing puppet in the places I was looking. I had never used ruby before, so I did some ruby practice on codecademy and learned the basics of the different blocks in ruby. I then set up wordpress. As can be seen from that repository, it has been a very long time since I did chef. I played with chef for about 6 months, and then I decided to try Ansible and see what that was all about. I liked the idea of pushing instead of pulling, and the easy deployment method was nice. But after about a month of using Ansible, Joseph Hall came to Rackspace right after the first SaltConf in 2014 and gave a 3 day class on salt. And I was in love. I loved the extensibility of salt, the reactor, the api, salt-cloud being built in. It was all just perfect for me. And by the end of the second day of the class, I had submitted my first pull request to saltstack.

My favorite thing I think about contributing to salt is how open it is to the community and how hard we all try to be welcoming to anyone new. We kind of have a No Jerks Allowed Rule, and try to be as polite and welcoming as possible.

Anyway, let's get started. This is going to be as much as I can think of on how to go about contributing to salt.

Getting setup

I probably run my testing setup a little differently than everyone else. Any way you can get salt running to do testing is good. If it works for you, do it.

I create a server in VMWare Fusion using CentOS. Then I install epel-release, then python-pip, and then I do a pip install -e git://github.com/gtmanfred/salt.git@<branch>#egg=salt. This gives me everything I need to get salt running. Since it is installed with -e, the git checkout is editable, so changes take effect immediately and I can edit right there in ./src/salt. From there, I can just commit all my changes right in place to save for later.
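As a rough illustration of what the editable install buys you (plain Python, no pip, and a made-up module name): the interpreter reads straight from the source tree, so an edit is picked up without reinstalling anything.

```python
# Imports resolve straight from a source directory on sys.path, so editing
# the file takes effect on the next (re)import -- the property a pip -e
# install gives you for a whole checkout.
import importlib, os, sys, tempfile

src = tempfile.mkdtemp()                       # stand-in for ./src/salt
with open(os.path.join(src, "mymod.py"), "w") as f:
    f.write("VERSION = 'first'\n")
sys.path.insert(0, src)

import mymod
before = mymod.VERSION
with open(os.path.join(src, "mymod.py"), "w") as f:
    f.write("VERSION = 'edited, no reinstall'\n")  # edit in place
importlib.reload(mymod)
after = mymod.VERSION
```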

Recently I have been trying to switch to using atom as my editor. I really like it. What I have been using is the remote-ftp plugin. It lets me point the remote directory at ~/src/salt in the .ftpconfig; once connected, a second project window shows the remote location with all the files, and I can treat them as if they were local. Then once all the files are done, I sync from the remote down to the local copy and make my pull request.

Either way, get a working environment going.

Here is the salt document on getting started with the development. You can ignore parts in there about M2Crypto and swig. There are no currently supported salt versions that use M2Crypto.

Another thing you could do, if you were so inclined, would be to copy the module you are going to modify to /srv/salt/_modules or whichever dynamic directory it belongs in. You will then need to run salt-call saltutil.sync_all to sync modules to the minion, or salt-run saltutil.sync_all for the master.

Writing a … template

The first thing that I do any time I make a new file for a salt module is to add the following template.

# -*- coding: utf-8 -*-
'''
:depends: none
'''
from __future__ import absolute_import

# Import python libraries

# Import Salt libraries


def __virtual__():
    return True

Here are the things that are going on above.

  1. We require the # -*- coding: utf-8 -*- at the top of all files.
  2. Each file requires a docstring at the top to list any dependencies and any basic configuration needed for usage, such as s3 credentials. It is also good to use the :depends: key if there are any required packages that need to be installed for the module to be used.
  3. I pretty much always import absolute_import. This is just useful to have and will cause fewer weird issues later. Plus it is the default behavior in python3, so nothing bad can come from it.
  4. Then we have the two import sections. Anything that gets imported from salt, like salt.utils, goes under Import Salt libraries, and all other imports go under python libraries.
  5. Then we have the __virtual__ function, which we will go over later when we talk about the anatomy of a module.

Execution Modules

Now let's move on to writing a module. I am going to demo with a contrived example of a redis module, and then go over every line.

Here is a simplified salt/modules/redismod.py file.

# -*- coding: utf-8 -*-
'''
Redis module for interacting with basic redis commands.

.. versionadded:: Nitrogen

:depends: redis

Example configuration

.. code-block:: yaml

    redis:
      host:
      port: 6379
      database: 0
      password: None
'''

from __future__ import absolute_import

# Import python libraries
try:
    import redis
    HAS_REDIS = True
except ImportError:
    HAS_REDIS = False

__virtualname__ = 'redis'


def __virtual__():
    '''
    Only load this module if redis python module is installed
    '''
    if HAS_REDIS:
        return __virtualname__
    return (False, 'The redis execution module failed to load: redis python module is not available')


def _connect(host=None, port=None, database=None, password=None):
    '''
    Return redis client instance
    '''
    if not host:
        host = __salt__['config.option']('redis.host')
    if not port:
        port = __salt__['config.option']('redis.port')
    if not database:
        database = __salt__['config.option']('redis.database')
    if not password:
        password = __salt__['config.option']('redis.password')
    name = '_'.join([str(host), str(port), str(database), str(password)])
    if name not in __context__:
        __context__[name] = redis.StrictRedis(host, port, database, password)
    return __context__[name]


def get(key, host=None, port=None, database=None, password=None):
    '''
    Get Redis key value

    CLI Example:

    .. code-block:: bash

        salt '*' redis.get foo
        salt '*' redis.get bar host= port=21345 database=1
    '''
    server = _connect(host, port, database, password)
    return server.get(key)


def set(key, value, host=None, port=None, database=None, password=None):
    '''
    Set Redis key value

    CLI Example:

    .. code-block:: bash

        salt '*' redis.set foo bar
        salt '*' redis.set spam eggs host= port=21345 database=1
    '''
    server = _connect(host, port, database, password)
    return server.set(key, value)


def delete(key, host=None, port=None, database=None, password=None):
    '''
    Delete Redis key value

    CLI Example:

    .. code-block:: bash

        salt '*' redis.delete foo bar
        salt '*' redis.delete spam host= port=21345 database=1
    '''
    server = _connect(host, port, database, password)
    return server.delete(key)

There, that is a moderately simple example where we can talk about everything going on.

  1. You will notice the coding line at the top, like in the template.
  2. Next we have the docstring.
    • There is a brief description
    • a versionadded string. Please include these when you make new modules, so that when referencing back we can see when the module was added. If it is an untagged release, use the codename; otherwise use the point release where it was added. We update the codenames on all versionadded and versionchanged strings when we tag them with a release date.
    • A depends string, to let the user know that the redis python module is required.
    • An example configuration, if one is applicable.
  3. Then we have the imports. We catch the import error on redis, and set HAS_REDIS as False if it can't be imported so that we can reference it in the __virtual__ function and know if the module should be available or not.
  4. __virtualname__ is used to change the name the module should be loaded under. If __virtualname__ isn't set and returned by the __virtual__ function then the module would be called using redismod.set.
  5. The __virtual__ function is used to decide if the module can be used or not.
    • If it can be used and it has a __virtualname__ variable, return that variable. Otherwise if it is to be named after the name of the file, just return True.
    • If this function can't be used, return a two entry tuple where the first index is False and the second is a string with the reason it could not be loaded so that the user does not have to go code diving.
  6. Now the connect function.
    • If you include something like this, please be sure to also include the ability to connect to the module by passing arguments from the command-line and not only having to modify configuration files.
    • It is important to note that while python allows any "private" function to be imported and used, salt does not. The _connect function is not usable from the command-line or from the __salt__ dictionary.
    • There are a lot of includes that salt provides into different portions of salt. These are usually called dunder dictionaries.
    • Using config.get lets the configuration be put in the minion config, grains, or pillars. There is a hierarchy.
    • Lastly we have __context__. This is really useful for connections, because you only have to set up the connection one time; then you can keep returning and reusing it every time the module is used, instead of having to reinitialize the connection.
  7. Lastly we have the functions that are available.
    • You want a docstring that has a description, then a code example. The code example is required. This is the docstring that gets shown when you run salt-call sys.doc <module.function>.
    • Then just all the logic.
    • If you have logic that is used a lot in multiple functions, consider splitting it out into another function for everything else to use, and if that function shouldn't be used from the command-line, be sure to prefix it with an underscore.

And that is your basic anatomy of a salt execution module.
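The __context__ trick from point 6 is worth a standalone sketch. This is plain Python, not real Salt (in a real module, Salt injects __context__ for you, and FakeClient stands in for redis.StrictRedis): the connection is built once per unique set of parameters, then reused.

```python
# __context__ stands in for the dict Salt injects into execution modules;
# the connection object is built once per unique parameter set, then reused.
__context__ = {}

class FakeClient(object):
    """Stand-in for redis.StrictRedis; counts how often it is constructed."""
    instances = 0
    def __init__(self, host, port):
        FakeClient.instances += 1
        self.host, self.port = host, port

def _connect(host='localhost', port=6379):
    name = '_'.join([host, str(port)])
    if name not in __context__:
        __context__[name] = FakeClient(host, port)
    return __context__[name]

a = _connect()
b = _connect()            # cache hit: same object, no new connection
c = _connect(port=6380)   # different parameters: new connection
```

Because __context__ persists across module calls within a run, only two clients are ever constructed here no matter how many times _connect is called with these parameters.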

State Modules

Now let's move on to writing state modules. State modules are where all the idempotence, configuration, and statefulness come in. I am going to use the above module to make sure that certain keys are present or absent in the redis server.

Here is my simplified salt/states/redismod.py

# -*- coding: utf-8 -*-
'''
Management of Redis servers
===========================

.. versionadded:: Nitrogen

:depends: redis
:configuration: see :py:mod:`salt.modules.redis` for setup instructions

Example States

.. code-block:: yaml

    set redis key:
      redis.present:
        - name: key
        - value: value

    set redis key with host args:
      redis.absent:
        - name: key
        - host:
        - port: 1234
        - database: 3
        - password: somepass
'''

from __future__ import absolute_import

__virtualname__ = 'redis'


def __virtual__():
    if 'redis.set' in __salt__:
        return __virtualname__
    return (False, 'The redis execution module failed to load: redis python module is not available')


def present(name, value, host=None, port=None, database=None, password=None):
    '''
    Ensure key and value pair exists

    name
        Key to ensure it exists

    value
        Value the key should be set to

    host
        Host to use for connection

    port
        Port to use for connection

    database
        Database key should be in

    password
        Password to use for connection
    '''
    ret = {'name': name,
           'changes': {},
           'result': False,
           'comment': 'Failed to set key {key} to value {value}'.format(key=name, value=value)}

    connection = {'host': host, 'port': port, 'database': database, 'password': password}
    current = __salt__['redis.get'](name, **connection)
    if current == value:
        ret['result'] = True
        ret['comment'] = 'Key {key} is already set to the correct value'.format(key=name)
        return ret

    if __opts__['test'] is True:
        ret['result'] = None
        ret['changes'] = {
            'old': {name: current},
            'new': {name: value},
        }
        ret['pchanges'] = ret['changes']
        ret['comment'] = 'Key {key} will be updated.'.format(key=name)
        return ret

    __salt__['redis.set'](name, value, **connection)

    current, old = __salt__['redis.get'](name, **connection), current

    if current == value:
        ret['result'] = True
        ret['comment'] = 'Key {key} was updated.'.format(key=name)
        ret['changes'] = {
            'old': {name: old},
            'new': {name: current},
        }
        return ret

    return ret


def absent(name, host=None, port=None, database=None, password=None):
    '''
    Ensure key is not set.

    name
        Key to ensure it does not exist

    host
        Host to use for connection

    port
        Port to use for connection

    database
        Database key should be in

    password
        Password to use for connection
    '''
    ret = {'name': name,
           'changes': {},
           'result': False,
           'comment': 'Failed to delete key {key}'.format(key=name)}

    connection = {'host': host, 'port': port, 'database': database, 'password': password}
    current = __salt__['redis.get'](name, **connection)
    if current is None:
        ret['result'] = True
        ret['comment'] = 'Key {key} is already absent'.format(key=name)
        return ret

    if __opts__['test'] is True:
        ret['result'] = None
        ret['changes'] = {
            'old': {name: current},
            'new': {name: None},
        }
        ret['pchanges'] = ret['changes']
        ret['comment'] = 'Key {key} will be deleted.'.format(key=name)
        return ret

    __salt__['redis.delete'](name, **connection)

    current, old = __salt__['redis.get'](name, **connection), current

    if current is None:
        ret['result'] = True
        ret['comment'] = 'Key {key} was deleted.'.format(key=name)
        ret['changes'] = {
            'old': {name: old},
            'new': {name: current},
        }
        return ret

    return ret

And let's review. This will be mostly the same as the execution module, with one major difference: we use the execution module in the state.

  1. Same coding line
  2. Include depends and configuration information. If the configuration is stored with the module, you can link to the module using a py:mod link like I did above.
  3. Include any complex information about the state in the top doc string. It is important to also include an example state up here. But if you have more complicated states, it would be good to include examples in each function to show how they should be used.
  4. Check to see if the redis.set function is loaded in the __salt__ dunder. If it is not loaded, we know we can't do any work in this state, and we should return False.
  5. Now we get to writing a state
    • We have a return dictionary and it always includes the following:
      • name: the string name of the state
      • changes: a dictionary of things that were or could be changed
      • pchanges: a dictionary of potential changes that is used if test=True is passed
      • result: True, False, None
      • comment: a string describing what happened in the state.
    • I always start with a default ret variable that describes what happens when the state fails, so I can just return it on failure at the end.
    • Then the first thing to do is check whether the state is already as it should be. In the case of present we check if the key is already set to the desired value. For absent we check if the key is set to None, which indicates a null value, which is what redis considers deleted. If it is already correct, we set result to True, set the comment to reflect that, and return the dictionary.
    • There is also a test run portion of the state: we check if __opts__['test'] is True, which signifies that test=True was passed on the command-line. In that case we only set changes to reflect what would change, and return with result set to None to signify that the change is expected to succeed but has not been made.
    • Last, we make the change, then check that the change took effect. If it did, result should be True, and we return with the correct stuff in changes and an updated comment.
    • Otherwise we return with our False dictionary we setup at the beginning.

One other thing to remember is the mod_init and mod_watch functions. These can be used to change the way the module behaves when initially called. mod_watch is what is actually called when you watch or listen to a state in your requisites.
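The return-dictionary contract described above can be boiled down to a toy, Salt-free sketch. STORE is a made-up in-memory stand-in for the redis server, and test is passed as a plain argument instead of coming from __opts__:

```python
# Minimal state-style function: short-circuit when already correct, dry-run
# when test=True (result None plus planned changes), otherwise apply and report.
STORE = {}  # hypothetical stand-in for the redis server

def present(name, value, test=False):
    ret = {'name': name, 'changes': {}, 'result': False,
           'comment': 'Failed to set key {0}'.format(name)}
    current = STORE.get(name)
    if current == value:
        ret['result'] = True
        ret['comment'] = 'Key {0} is already set to the correct value'.format(name)
        return ret
    changes = {'old': {name: current}, 'new': {name: value}}
    if test:
        ret['result'] = None           # None means "would change"
        ret['changes'] = changes
        ret['comment'] = 'Key {0} will be updated.'.format(name)
        return ret
    STORE[name] = value                # make the change, then report it
    ret['result'] = True
    ret['changes'] = changes
    ret['comment'] = 'Key {0} was updated.'.format(name)
    return ret
```

Note that the test=True branch returns before touching STORE, which is the whole point of a dry run.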

Running pylint on your changes

We run pylint on every change, so it is a good thing to know about, because you can start adjusting yourself to write more in line with what pylint wants. The only big thing I will say you should know is that our line limit is actually 119 instead of 80.

Now, to run pylint, you are going to need a few things. You should install all the dependencies for salt, including everything in dev_python27.txt. Then you also need to update to the newest versions of SaltPylint and SaltTesting.

pip install -r requirements/dev_python27.txt -r requirements/raet.txt -r requirements/zeromq.txt
pip install --upgrade SaltPyLint SaltTesting

And then you can run pylint on your code before submitting a PR.

pylint --rcfile=.testing.pylintrc --disable=W1307,E1322 salt/ 

Getting the docs working

Unfortunately, if you write a new module, sphinx is unable to discover it and import the docstrings for you, so we need to create a few files to reference the ones above.

First, we autoload the docstrings for the actual doc file.


==================
salt.modules.redis
==================

.. automodule:: salt.modules.redismod
    :members:


==================
salt.states.redis
==================

.. automodule:: salt.states.redismod
    :members:

Then, so they get compiled and show up in the index pages for all execution modules and all state modules, add redis to doc/ref/modules/all/index.rst and doc/ref/states/all/index.rst.

Creating a pull request

We love pull requests. Just look at the github repository: there have been 23,296 pull requests, almost all of which I would bet were accepted, and almost none closed with us saying we can't accept that. There have been 1642 contributors as of this writing!

Here are things to remember when opening a pull request.

  • If it is a new feature, add it to develop. We provide a very easy way to take the changes and import them into a running system, and we don't want to break other people's deploys by adding new features into point releases.
  • If it is a bug fix, go back to the latest supported release, and add it there. Right now, unless it is a CVE change, the oldest supported release for commits is 2016.3, everything else is in phase 3 or extended life support. (We are working very hard to get Carbon out the door right now.)
  • Please fill out the form! Fill out as much of the pull request form as makes sense, and provide us with as much information about the change you are making as you can. I am bad about it too; sometimes I just assume Mike Place or Nicole Thomas are mind readers and can just get what I mean, but they definitely can't. So let them know in detail what you are actually changing.
  • I will cover this in a later part, but please provide unittests if at all possible! (though not required)

End of Part 1

This was a lot longer than I thought it was going to be. I am going to try to continue next week and talk about beacons and engines and some specifics to look for there. Hopefully this will be helpful to someone. It basically just became a link dump to a lot of useful information in our documentation, since that can sometimes be hard to find.

Leave a comment if there is anything you would like to see covered.

Original Article

ttf-dejavu 2.37 will change the way fontconfig configuration is installed. In previous versions the configuration was symlinked from post_install/post_upgrade, the new version will place the files inside the package like it is done in fontconfig now.

For more information about this change: https://bugs.archlinux.org/task/32312

To upgrade to ttf-dejavu 2.37 it's recommended to upgrade the package on its own: pacman -S --force ttf-dejavu

Original Article

Getting Started

This requires at least the 2016.11.0 release of saltstack.


Then just yum install -y salt-minion

Modules from Nitrogen

You will also need a few new modules and a new engine that will be in the Nitrogen release.

In /srv/salt/_modules you will need the following two modules: new hashutil module and new event module

And the thing that makes it all possible the new webhook engine needs to be put in /srv/salt/_engines: Webhook engine

Once these are all in place, run salt-call saltutil.sync_all to make sure they get put in the extmods directory and are usable.


My configurations are located here, but I will highlight some of the specifics below.

First, we want to make the minion a masterless minion, so it never queries the master for anything. To that end, add the following to /etc/salt/minion.d/local.conf:

local: True
file_client: local
master_type: disable

Any one of the settings would do; I like to use all three just to make certain.

Second, we need to set up the ssl keys so that we can have a secure connection. You can run the following command to create a generic ssl certificate; if you want verification, you can make a proper one for the domain and everything, but we just want the traffic encrypted, so use salt-call --local tls.create_self_signed_cert. Now that we have an ssl certificate pair, we can set up the webhook engine. I put the following in /etc/salt/minion.d/engines.conf.

engines:
  - webhook:
      address: None
      port: 5000
      ssl_crt: /etc/pki/tls/certs/localhost.crt
      ssl_key: /etc/pki/tls/certs/localhost.key
  - reactor: {}

reactor:
  - 'salt/engines/hook/update/blog/gtmanfred.com':
    - salt://reactor/blog.gtmanfred.com.sls

This will enable the webhook on all ips on port 5000 with the listed ssl certificate. It will also enable the reactor to be able to act upon the one tag in the event stream, which we will get to later.

Now we need to set up the github webhook so we can see the events in the event stream. Go to your blog's github repository and open the settings. Then select webhooks, and create a new one.

Configure Github

For the "Payload URL" you are going to set https, then the ip address/domain and port to access, followed by the URI, which should match what you are going to trigger on in the reactor. As you can see in the picture above, I have /update/blog/gtmanfred.com as my URI, and this matches what follows the salt/engines/hook prefix in the reactor config above. Be sure to add a secret! And don't forget it! We will be verifying it in a later step. Then customize which events you would like to trigger on, and save. I am going to rebuild the blog on each push, so I am only sending push events.


Before you forget that secret key, we should save it somewhere. I use sdb in salt so that I can keep my states and reactors in public github but hide the secret key in sdb. Create /etc/salt/minion.d/sdb.conf with the following.

secrets:
  driver: sqlite3
  database: /var/lib/sdb.sqlite
  table: sdb
  create_table: True

Now run salt-call --local sdb.set sdb://secrets/github_secret <secretkey> to save the key.

Now the last step, creating the reactor file in the salt fileserver. Mine is in /srv/salt/reactor/blog.gtmanfred.com.sls, so I just have to reference it with salt://reactor/blog.gtmanfred.com.sls (and can also use the reactor files from gitfs).

{%- if salt.hashutil.github_signature(data['body'], salt.sdb.get('sdb://secrets/github_secret'), data['headers']['X-Hub-Signature']) %}
highstate_run:
  caller.state.apply:
    - args: []
{%- endif %}

Let's walk through this. We take the data['body'] from the github post, our secret, and the X-Hub-Signature header, and run them through the github_signature function to verify that the signature is the result of signing the body with your secret key. If True comes back, we can be sure this came from github, and our minion runs a highstate on itself. If it is False, nothing is rendered and nothing is run.
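If you're curious what that verification boils down to, GitHub's webhook signature is (to my understanding) a plain HMAC-SHA1 of the raw request body with your shared secret, hex-encoded into the X-Hub-Signature header as "sha1=<digest>". A standalone sketch of the check, with made-up body and secret values:

```python
# Standalone sketch of GitHub webhook signature verification:
# HMAC-SHA1(secret, raw_body) compared against the X-Hub-Signature header.
import hashlib, hmac

def github_signature(body, secret, signature):
    """Return True if `signature` ("sha1=<hex>") matches HMAC(secret, body)."""
    scheme, _, sig = signature.partition('=')
    digest = hmac.new(secret.encode(), body.encode(),
                      getattr(hashlib, scheme)).hexdigest()
    return hmac.compare_digest(digest, sig)

# Simulate what GitHub would send for a given body and shared secret:
body = '{"ref": "refs/heads/master"}'
secret = 'supersecret'
header = 'sha1=' + hmac.new(secret.encode(), body.encode(),
                            hashlib.sha1).hexdigest()
```

Using hmac.compare_digest for the comparison avoids leaking information through timing differences.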

Original Article

At work we use Git with https auth, which sadly means I can't use ssh keys. Since I don't want to enter my password every time I pull or push changes to the server, I wanted my password manager to handle this for me. Git has pluggable credential helper support (for example for gnome-keyring and netrc), and adding pass support turned out to be quite easy. Create an executable script called "pass-git.sh" and put the following contents in it, where 'gitpassword' is your password entry.

#!/bin/sh
echo "password="$(pass show gitpassword)

In your git directory that uses https auth, execute the following command to set up the script as a credential helper.

git config credential.helper ~/bin/pass-git.sh
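You can check the wiring without touching the remote: git credential fill runs the configured helpers and prints the resolved credential. A sketch using a stub helper (fixed password and made-up host/username standing in for the pass-backed script):

```python
# A stub helper (fixed password instead of `pass show gitpassword`) wired
# into `git credential fill`, which is how git consults helpers internally.
import os, stat, subprocess, tempfile

helper = os.path.join(tempfile.mkdtemp(), "pass-git.sh")
with open(helper, "w") as f:
    f.write('#!/bin/sh\necho "password=s3cret"\n')
os.chmod(helper, os.stat(helper).st_mode | stat.S_IEXEC)

out = subprocess.run(
    # the first empty credential.helper clears any globally configured helpers
    ["git", "-c", "credential.helper=", "-c", "credential.helper=" + helper,
     "credential", "fill"],
    input="protocol=https\nhost=example.com\nusername=jelle\n\n",
    capture_output=True, text=True, check=True).stdout
```

Because the helper supplies the password and the username is already known, git prints the completed credential instead of prompting, which is exactly the behavior you want on pull and push.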

Voila, that’s all that’s it.

Git credential helper pass was originally published by Jelle van der Waa at Jelly's Blog on October 14, 2016.

Original Article