Thoughts on dotfiles

I've maintained a dotfiles repository since 2011 (at least, that's when I initially put it into git), and haven't thought much about it since then. I wrote a basic "dotfiles installation" script back in 2013 and have been using and lightly modifying it ever since. My primary use case was just syncing dotfiles between my personal and corporate MacBooks, so it didn't need much flexibility. However, the Makefile wasn't trivial, and doing simple things like adding new files required knowing the magic incantation (a combination of undocumented convention and GNU Make functions).[1]

I recently reevaluated my strategy since I was setting up a new workstation for the first time in a while, and it was a Linux box, which meant there would be configuration that wasn't compatible with my MacBook. I decided that I needed to improve the situation, and came up with a list of personal priorities.

  1. Minimize the required knowledge. I only go near my dotfiles manager once in a while, and so it's very easy to forget any customizations or processes I need to follow. The ideal solution should be very easy to reason about and require few, if any, specialized commands or configuration files.
  2. Minimize the likelihood of failure. I might be configuring a MacOS or Linux machine of any distribution, and some software packages might not be available on the machine during installation. Installing my dotfiles should always work, regardless of the existing configuration of the target machine.
  3. Minimize the friction of updating. Sharing a modification from one machine to the rest should be as painless as possible. In general, I already have a working configuration file on one of my machines, and I just want that file, verbatim, on another machine.

Of course, everybody and their brother has their own preferred solution for managing their dotfiles, so naturally there's a plethora of prior art to choose from. I looked through all of the ones on GitHub does dotfiles, but (unsurprisingly) found that none were quite what I was looking for. When I was reviewing the list, I found a few factors that caused me to rule out most of the projects immediately:

Projects that bill themselves as "dotfile package managers". These all violate priority 1 for me by introducing a new set of commands/configuration to learn so I can manage my "dotfiles packages". Beyond that, the functionality to "update" my dotfiles to a version that someone else wrote that I have never reviewed is a huge violation of all of my priorities. There's no way I would introduce this level of complexity into my workflow.

Projects that depend on (node|python|ruby|gcc|apt). These violate priority 2 for me. Not all of my systems have these language runtimes installed, and I don't want to install one just to create some symlinks. Many scripting languages are also moving targets, requiring specific versions of the language runtime in order to function properly, which might not be the same version installed by the OS package manager. The best projects are those that can be run directly from the git repository or have a single statically-linked binary distributed with the project.

Projects that interact with git. I use git every day and have a git workflow that I use every day. Dotfile managers that attempt to manage my dotfiles git repository (or "wrap git") end up putting me in situations outside of my "git happy path", and inadvertently cause more confusion than simplicity.

Projects that don't clean up unused configuration files. This one is minor, but most dotfiles managers are good at getting files onto your system, but bad at getting them off, leaving dangling symlinks in their wake. When I remove the configuration from my git repository, I want it to be removed from my home directory as well.

Building my own solution

Like so many before me, I decided to write my own solution: dfm. The goals for dfm line up with the three priorities from before:

  1. Simple. Conceptually, dfm should be as simple as cp, with no new concepts. When using it, there should be a small number of commands and few, if any, options for those commands. Everything should be clearly documented, but reading the documentation should not be necessary for daily use.
  2. Reliable. The program should be able to run on any freshly-configured Linux or MacOS system without any additional software installation. It should also work when running against an already-configured system, and not do anything silly like destroy the existing configuration.
  3. Easy. The program should have one command (or fewer!) to set up a new machine; one command to update the machine; and one command to update the repository from the machine.

I think dfm accomplished these goals well. In particular, I'm happy with the simplicity of the workflow:

  • dfm add will copy a file from your home directory into your git repository, then replace it with a symlink to the git repository. No configuration files need to be edited.
  • dfm link will sync your git repository to your home directory for both new files and removed files. It doesn't overwrite existing files by default, but using the industry-standard -f flag will force it.
  • There's a command to stop using dfm. I don't recall a single other dotfiles manager giving you a way out.

I did violate priority 1 a bit in dfm, by adding support for "multiple repositories". I decided to do this because it grants you quite a bit of added flexibility (configurations for different OSes/machine roles, "dotfiles packages" through submodules, local-only configurations through .gitignore, etc.), at the cost of a required option during initial configuration.

I stuck to my priorities as much as possible for the rest. Here are a few features that dfm does not have, because they would violate the priorities:

  • Templated files. Instead, I can write a script that runs dfm link and then generates additional files.
  • Script running. Instead, I can write a script that runs dfm link and then any other scripts I need.
  • Secret management. Instead, I can use an external tool to store the secrets and combine it with a script that runs dfm link and then installs the secrets.
  • Syncing. Instead, I can do this using git, like I do every day for all of my projects.

As I said before, everybody has their own custom solution for this, and I'm sure that for many of the solutions, the extra complexity they added is worthwhile to them. I wanted to build something that was deliberately simple and worked everywhere I am likely to find myself, and I think dfm does a good job of it.

My advice

Obviously, if your priorities are similar to mine, dfm is a project you should look into.

But really, whatever you decide, document it for yourself! Your repository README should at least cover setting up a new machine, adding a new config file, and syncing changes on an existing machine.

  1. To get an idea of what I was dealing with, here is the last version of that script before I deleted it.

Dev Diaries - July 21, 2018

This is a short update about all of the hobby projects I have in flight. I'm mostly writing this as a way for me to keep track of what's next on each of them, so I can decide what to work on next. 😀

Glish

The core language is working well enough that I'm ready to start working on my first real Glish application to help design the other language features. The first application is a board game engine, where Glish is the repository of game rules and a web application is responsible for collecting the actions from the players and showing everything.

Since I'm not totally sure what a "board game engine" looks like, I want to code a basic implementation in JavaScript, just to help me figure out what the primitives are. I am building a command-line Mancala program as an initial trial, and then plan to add some other simple game built on the JS version of the engine to "kick the tires" of the model I've designed. Then I'll implement that model in Glish, and finally build a graphical frontend.
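
As an example of the kind of primitives I mean, here's a hypothetical shape for the engine: a state plus `listMoves`/`applyMove` functions. This uses a deliberately simplified Mancala (no captures, no extra turns), so it's a sketch of the model, not my actual engine.

```javascript
const PITS = 6; // pits per player, plus one store each

function initialState() {
  return {
    // board[0..5]: player 0's pits, board[6]: their store,
    // board[7..12]: player 1's pits, board[13]: their store
    board: [4, 4, 4, 4, 4, 4, 0, 4, 4, 4, 4, 4, 4, 0],
    player: 0,
  };
}

function listMoves(state) {
  // A move is the index of a non-empty pit on the current player's side.
  const base = state.player === 0 ? 0 : PITS + 1;
  const moves = [];
  for (let i = 0; i < PITS; i++) {
    if (state.board[base + i] > 0) moves.push(base + i);
  }
  return moves;
}

function applyMove(state, move) {
  // Sow counterclockwise, skipping the opponent's store; return a new state.
  const board = state.board.slice();
  const opponentStore = state.player === 0 ? 13 : 6;
  let seeds = board[move];
  board[move] = 0;
  let pos = move;
  while (seeds > 0) {
    pos = (pos + 1) % board.length;
    if (pos === opponentStore) continue;
    board[pos]++;
    seeds--;
  }
  return { board, player: 1 - state.player };
}
```

The appeal of this shape is that the frontend, the AI, and (eventually) the Glish implementation all only need these two functions.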

The last step is a bit tricky for me. It turns out I'm really only good at building web applications of the traditional "application" kind, like with buttons and forms and stuff. I haven't done any work with any modern tools to handle things like animations, physics, etc. So I had to take a pretty big detour here to learn about these. I compared a few and settled on Phaser, a JavaScript library designed to make 2D games. After doing a tutorial, I think I have a good enough grasp of Phaser to start building the front end. I wanted to do this before I got too far into building a particular implementation in case there were any unforeseen difficulties.

Next steps on this project:

  1. Finish building the command-line version of the engine and make sure the designed model makes sense.
  2. Build a command-line version of the engine in Glish.
  3. Build the graphical frontend.

Chess 2

I've been working on a Chess 2 game for several years now. There's an official Steam game, but it doesn't offer mobile play, and the AI that comes included with the game is woefully bad (it appears to be a stock Chess AI dropped into the rules of Chess 2). My goal from the start has been to build a better Chess 2 AI and learn about machine learning in the process.

I started this project by building a Python implementation of the Chess 2 engine. In doing this I discovered many difficult edge cases in the rules of Chess 2 and built an engine that could handle all of these. I chose Python because I was planning on developing the AI using TensorFlow, for which Python is the preferred language.

After I did that, I built an online app to play games. It's built with React, GraphQL, AWS Lambda, and AWS DynamoDB. The biggest thing I learned in doing this is that my entire API should have been a single Lambda function. I used a different Lambda function for each endpoint, which means that deploys are more difficult and the app takes longer to "boot up". The web app frontend isn't very good, and I haven't touched it in probably about a year.

In order to train an AI, you typically need a huge data set. One doesn't exist for Chess 2 games, so I looked for "tabula rasa" training methods. The widely-known AlphaGo Zero was one implementation, and Giraffe was another, targeting Chess. I was very excited when the AlphaZero paper was published and was confident I'd be able to get decent results if I followed the same methodology.

Unfortunately, the problem facing me now is purely technical. My Chess 2 engine is slow. On my computer, I'm only able to list valid moves at a rate of about 27 boards per second, and this doesn't include doing any actual evaluation on those boards (i.e. no "AI"). I estimate that at this speed I would need around 100 days of 24x7 training to achieve similar results to the AlphaZero paper (300k self-play games). And if there were a bug, I'd have to start over.
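
As a back-of-the-envelope check of that estimate: the 27 boards/second is measured, but the per-game figure below is my own assumption, chosen only to show how a number like 100 days can fall out of the arithmetic.

```javascript
const boardsPerSecond = 27; // measured move-generation rate
const games = 300_000;      // self-play games, per the AlphaZero paper
const evalsPerGame = 800;   // ASSUMED: move generations per self-play game
const seconds = (games * evalsPerGame) / boardsPerSecond;
const days = seconds / 86_400;
console.log(days.toFixed(0)); // prints "103"
```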

I think I need to make move generation at least 1,000x faster in order to have a real chance. Building the engine in Python seems to have been a pretty big failure here due to the speed issues. PyPy might be faster, but isn't supported by TensorFlow. True multithreading isn't possible in Python because of the GIL, and multiprocessing has too much overhead to enable the 1,000x speedup I'm looking for (the algorithm is very sensitive to latency).

Next steps on this project:

  1. Think more about neuroevolution, which may enable a multiprocess solution to work.
  2. Rewrite the engine in C++, then figure out how to build a parallelizable MCTS. This might require writing the neural network in C++ as well.
  3. Make the Chess 2 Online app better.

HTerminal

I haven't touched HTerminal in over 2 years, but I've started to think more about it recently. I stopped working on the project because I lost confidence that the approach I was taking was going to meaningfully enhance the terminal experience.

The current status of HTerminal is "generally working". It supports a few neat features and generally works as a standard console, although nowhere near as well as iTerm 2. The features that it adds aren't game-changing, though. I can show icons in ls, or put a "commit" button in the output of git status, but none of these things really makes me materially more productive.

I have some ideas about how to improve HTerminal, which I'd like to write about soon. I think the core ideals of HTerminal are valid, like not replacing the shell and working over SSH; but I think the improvements that it brings to the table are lackluster presently. I'd like to incorporate some of the ideas from UpTerm into the project, but without compromising the ideals HTerminal has about backwards compatibility.

Next steps on this project:

  1. Think more about specific features that would improve my own productivity.
  2. Build them into HTerminal.

Dev Diaries - May 28, 2018

If you haven't heard of glish before, check out the post What is Glish?

Most of my work this week went into thinking about how glish programs will be run. Writing What is Glish? was part of that process and helped me crystallize what the main interfaces should be. There are basically two ways that glish programs will be run:

Embedded in a dedicated app. This is the main case. Some host program keeps track of domain-specific items, like items and carts and discounts, and glish is brought in to control the interactions between those items. Glish manuals will define some well-known rulebooks and/or phrases, and the host program will call them.

As a standalone program. This is actually a special case of the dedicated app case. Here, the host program is very minimal, only looking for an initial rulebook export to call and then letting glish take over. In this mode, there is deliberately minimal sandboxing, so the glish manual can use code literals to do things like import other modules or interact with the system.

This means that the overall structure of an application that uses glish (either glish-node standalone or embedding programs) will look like this:

  • Core application logic, written in the native language, including code to load glish and call well-known rulebooks and phrases.
  • Domain-specific glish runtime, written in glish and bundled with the application. This should create the well-known rulebooks and create glish interfaces for any exposed native functionality (for example, if your native code provides an applyDiscount method, this manual would set up the phrase, "To take (x - a number) from (c - a cart)").
  • Manual with business logic, also written in glish and loaded by the app at runtime. This will add rules to the rulebooks and define the actual "business logic" that glish is responsible for.


Based on this, I've taken the approach that glish compiles to a CommonJS module, and there are some helpers to load the module with or without sandboxing. But what should the exports of these modules be?

  • Objects are difficult to export because glish property names don't make convenient JS identifiers, and because properties in glish are type checked at compile time, but the exposed properties should be type checked at run time. The generator will need to provide a wrapper with get/set methods for the properties.
  • Phrases are very easy to export because method calls already have runtime type checking. However, if objects are received/returned, they will need to be unwrapped/wrapped.
  • Variables could be exported similarly to object properties, but I'm not actually sure that exporting variables is a common enough use case to merit being built in to glish. It's trivial to manually create a getter/setter phrase and export that.
  • Rulebooks are the main use case for glish so exporting them should be built in. It's easy to create a syntactic sugar for exporting a "follow rulebook" phrase automatically. I don't think editing the rules of rulebooks will be a common use case, and again the user could manually create phrases to accomplish this quite easily.
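
For the object case, here's a hypothetical sketch of the wrapper the generator might emit. The cart type, its "grand total" property, and the method names are all illustrative; the point is that compile-time-checked glish properties become run-time-checked get/set methods at the module boundary.

```javascript
// Hypothetical generated wrapper: glish property "grand total" becomes a
// getGrandTotal/setGrandTotal pair with a run-time type check on set.
function wrapCart(internal) {
  return {
    getGrandTotal() {
      return internal.grandTotal;
    },
    setGrandTotal(value) {
      if (typeof value !== "number") {
        throw new TypeError('"grand total" must be a number');
      }
      internal.grandTotal = value;
    },
  };
}
```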

Putting this together, here's a sample hello world manual and script:

To run the program (exported as `main`):
	say "Hello, world!".

To say (x - a value): `console.log(x)`.

const source = compile(["stdlib.glish", "manual.glish"]);
const mod = loadModule(source, { console });

In the future

The progress this week was mostly conceptual rather than code, but was useful for helping me lay down the proper interfaces.

I need to move the compiler to using a visitor pattern. This will make it much easier for me to do things like static overload resolution, dead code removal, function inlining, and type checking; plus it will be easier to test specific parts of the generator. This will require making the Program internal representation a little more homogeneous than it currently is, and I need to think about how indexes will be generated as well, since they will be affected.

My upcoming project list:

  • Create a board game app that can load glish manuals for the rules. I probably won't be able to finish this, but it should be a good highlight of the missing glish features.
  • Migrate the code generator to a visitor pattern; build a program index data structure.
  • Have the generator create wrappers for object types returned from phrases.
  • Continue working on the glish language (self-referential descriptions, etc).

What is Glish?

Glish is a programming language used to write manuals that other programs can use to influence their behavior. In glish, you don't write a program; you write a manual, which is a set of rules that another program can reference.

An Example

Imagine that you are making an ecommerce application and need to support some discounts for various situations:

  • First-time buyers get a 10% discount.
  • Any accessory is 50% off with the purchase of a protection plan.
  • All accessories are on a 15% sale right now.

A glish manual to support these discounts might look like this:

Discount for an order from a first-time user:
	take 10% from the grand total.
Discount for an order containing a protection plan and an accessory:
	take 50% from the most expensive accessory in the order.
Discount for an order containing an accessory:
	take 15% from each accessory in the order.

Glish probably does not look like any other programming language you've seen. The syntax is very prose-like, but under the hood this syntax precisely compiles to a specific implementation. In that example, each of the three sentences describes a rule. Breaking down the first sentence, we have:

In the first sentence, "Discount for an order from a first-time user" forms the preamble of the rule. The first part, "Discount for", says that the rule will live among the discount rules; and the second part, "an order from a first-time user", is a description of an order. This particular glish manual understands what orders and users are, as well as that orders are "from" a user, and that users can be "first-time" or not. The remainder of the first sentence forms the body of the rule, and this particular manual understands that it can "take" some amount "from the grand total".

The program that wants to reference the glish manual needs to provide definitions for concepts like "orders", or what it means to "take from the grand total", but once that has been done then any manual that uses these concepts can be referenced.
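
To make that division of labor concrete, here's a hand-written JS sketch of what the host side might look like: each rule is a condition over the order plus an effect, and the host follows the discount rulebook against a concrete order. This is not glish's actual compilation output, and all the names are illustrative.

```javascript
// Hypothetical host-side representation of the three discount rules.
const discountRules = [
  {
    applies: (order) => order.user.firstTime,
    effect: (order) => { order.grandTotal *= 0.9; }, // take 10% from the grand total
  },
  {
    applies: (order) =>
      order.items.some((i) => i.kind === "protection plan") &&
      order.items.some((i) => i.kind === "accessory"),
    effect: (order) => {
      // take 50% from the most expensive accessory in the order
      const accessories = order.items.filter((i) => i.kind === "accessory");
      const priciest = accessories.reduce((a, b) => (a.price >= b.price ? a : b));
      priciest.price *= 0.5;
    },
  },
  {
    applies: (order) => order.items.some((i) => i.kind === "accessory"),
    effect: (order) => {
      // take 15% from each accessory in the order
      for (const item of order.items) {
        if (item.kind === "accessory") item.price *= 0.85;
      }
    },
  },
];

function applyDiscounts(order) {
  for (const rule of discountRules) {
    if (rule.applies(order)) rule.effect(order);
  }
}
```

The glish manual replaces the `applies`/`effect` pairs; the host still owns what an order is and what "take from" means.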

Why Glish?

The main priorities for glish are:

  • To provide rulebooks for other programs to use. Rulebooks can be used for things like validating inputs, or for applying effects in response to certain circumstances.
  • To allow authors to describe complex rules easily. The condition under which a rule applies is built from simple pieces which can be composed together to express complex relationships.
  • To allow non-technical users to read and understand the code. Glish source is very English-like, and glish provides tools to diagnose the exact behavior of programs.

Glish is ideal in situations where the complexity is in the relationship between things, rather than in the things themselves. In the ecommerce example above, the core concepts in the application are orders, users, and items. Glish enters the picture to bring those concepts together.


Glish draws very heavily from Inform7, a programming language for writing interactive fiction. I first learned about Inform7 from a blog post by Eric Lippert, and was struck by the expressivity of the descriptions in the rule definitions. Once I started to dig in, I realized that the concepts used in Inform7 are useful in a variety of other contexts, which is why I set out to create a more general-purpose version of it. Along the way, I added additional flexibility to the language and removed some constructs that were specific to interactive fiction, but the language is similar in spirit as well as in syntax.

The name "glish" comes from a joke my high school English teacher used to say when we made grammatical mistakes. "You're not even writing full English, just 'glish!" Since glish looks like English, but is a smaller, quirkier subset, I thought the name was appropriate.

Dev Diaries - May 21, 2018

If you haven't heard of glish before, check out the post What is Glish?

This is the first in a series I'll be publishing on my experiences creating a new programming language. I've been working on Glish since late 2015. It's gone through several complete rewrites as I've learned more about how the eventual programming language should look, but this epoch is trending in a direction that might be workable. I recently started writing notes about my progress to help coalesce my thoughts and direct my development efforts. I typically do this at the end of any day I make a commit to the repo, so this is just a dump of those notes. As time moves on, I might add more cohesion to these updates.


First implementation of rulebooks done today. There are lots of problems:

  • no way to query the outcome of a rulebook
  • no named outcomes
  • rulebooks cannot have a basis
  • rulebooks cannot produce a value
  • the implementation is a massive pile of hand-written AST nodes and code literals
  • there is very little testing

I like the implementation where it creates a separate program to merge in, but there has to be an easier way to create the program.

Inform's rulebooks which produce values are allowed to "succeed producing value", "fail" and "make no decision". Additionally, rulebooks that "don't produce values" can set named outcomes and produce any of those outcomes as well as "make no decision".
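
My reading of those semantics, sketched in JS terms (not glish's actual implementation): following a rulebook consults each rule in order, and a rule either decides the outcome or passes.

```javascript
// Each rule returns { outcome: "success", value }, { outcome: "failure" },
// or null to make no decision and let the next rule be consulted.
function followRulebook(rules, subject) {
  for (const rule of rules) {
    const result = rule(subject);
    if (result !== null) return result;
  }
  return { outcome: "no decision" };
}
```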

Rulebooks need quite a bit of other work before I can really continue on them:

  • enumerated types are going to form the basis for named outcomes
  • loop constructs are going to be key to removing the code literals from the implementation


Was able to get a basic kind of either-or property implemented today. There are obviously problems.

The assertion `now X is Y` always makes a new instance; it doesn't understand asserting something about an existing value. I need to modify analyzeAssertion to somehow detect whether or not to allocate a new variable versus modify the properties on an existing one, including asserting new facts about a global variable.

Conditions can only check a single adjective because they are matching phrases rather than descriptions. I should modify the description matcher to take an "expected kind" to allow descriptions which don't include a kind.

All my properties are checked with ===, so I need to make sure that default values get set properly.
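
The pitfall in concrete terms (a made-up `door` example): a strict-equality check on a property that was never initialized compares against undefined, not the default.

```javascript
const door = {};                  // the "open" property was never initialized
console.log(door.open === false); // prints false: undefined !== false
door.open = false;                // explicitly setting the default fixes it
console.log(door.open === false); // prints true
```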

I also noticed that the "variable initialized somewhere else" error doesn't show its position correctly in the error message.

I think the next course of action should be fixing up the first two problems here, then moving on to adjectives describing value properties.


Types for phrases of one arg and rulebooks are implemented fairly well now. Here are my thoughts on my next steps:

Enumerated properties are necessary for named rulebook outcomes, and named rulebook outcomes will require some thought as to the phrasing and structure they permit. Enumerated properties are also holding up my implementation of Bohnanza, because I use them for turn phases.

My Nim example doesn't work because process and require aren't exposed to the running code. I looked into how babel-node exposes require, and it ends up doing it through a require hook using pirates. This would require a bit of a refactor in the compiler, but shouldn't be too hard. I think I should probably focus on the entry points into glish programs before going on too much further.

The only other problem I can see right now is around descriptions. I want to be able to say "taking the actor" and have it match taking actions where the actor is the object.

I think my list of priorities should look like:

  • Implement this "taking the actor" self-referential description type.
  • Think about how glish programs will be run.

Setting up RetroPie

I wanted to play some old video games, and I had a Raspberry Pi B+ lying around, so I thought I would check out RetroPie and get that working.

Flashing an image to an SD card using OS X

I downloaded the image (I used retropie-v3.3.1-rpi1), flashed it to a 4GB SD card, and set up the Pi.

$ mount
# Looking for my SD card
/dev/disk2s1 on /Volumes/NO NAME (msdos, local, nodev, nosuid, noowners)
$ diskutil unmountDisk /dev/rdisk2
Unmount of all volumes on disk2 was successful
$ sudo dd bs=1m if=Downloads/torrent/retropie-v3.3.1-rpi1.img of=/dev/rdisk2
# ^T will show you the progress here
$ diskutil unmountDisk /dev/rdisk2
Unmount of all volumes on disk2 was successful

Finding your hard-wired Pi without keyboard access

I use a MacBook as my primary computer and don't have any other keyboard, so typing on the Pi was my first difficulty. The Pi may output its IP address during the bootup process, but I didn't catch it. Fortunately, this is pretty easy to work around for a hard-wired Pi: I can just use the default hostname. This wouldn't work if my Pi had been using wifi, because I would have needed to configure the SSID and password, but I'm fine since I am using a wire.

$ ping retropie.local
ping: cannot resolve retropie.local: Unknown host

In this case, the hostname didn't work. Maybe my router was misconfigured, or maybe this distribution doesn't support this feature. Still, we can use nmap to find the IP address, then ssh in using the default login (pi / raspberry):

$ nmap 10.0.1.\*

Starting Nmap 6.46 ( ) at 2016-01-01 18:24 PST

I'm going to be looking for something with ssh running. My Pi ended up being this one:

Nmap scan report for
Host is up (0.018s latency).
Not shown: 997 closed ports
22/tcp  open  ssh
139/tcp open  netbios-ssn
445/tcp open  microsoft-ds

Nmap done: 256 IP addresses (5 hosts up) scanned in 55.26 seconds

$ ssh -l pi
pi@'s password:
Linux retropie 4.1.13+ #826 PREEMPT Fri Nov 13 20:13:22 GMT 2015 armv6l

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sun Jul 20 17:08:12 2014

  .~~.   .~~.    Saturday,  2 January 2016,  2:26:08 am UTC
 '. \ ' ' / .'   Linux 4.1.13+ armv6l GNU/Linux
  .~ .~~~..~.
 : .~.'~'.~. :   Filesystem      Size  Used Avail Use% Mounted on
~ (   ) (   ) ~  /dev/root       2.3G  2.0G  168M  93% /
( : '~'.~.'~' : ) Uptime.............: 0 days, 00h05m18s
~ .~       ~. ~  Memory.............: 132632kB (Free) / 250352kB (Total)
 (   |   |   )   Running Processes..: 71
 '~         ~'   IP Address.........:
   *--~-~--*     Temperature........: CPU: 41°C/105°F GPU: 41°C/105°F
                 The RetroPie Project,

pi@retropie ~ $

Setting up AFP to connect to a Time Capsule

Now that I had the keyboard set up, I proceeded to use the on-TV instructions to set up a wired Xbox 360 controller. Then it was time to set up some games. My games are stored on a Time Capsule file server, so I needed to set up that service. I basically followed this guide.

pi@retropie ~ $ sudo service samba stop
pi@retropie ~ $ sudo update-rc.d samba disable
# These two disable the samba sharing service. This isn't necessary, but I
# am not going to use it.
pi@retropie ~ $ sudo apt-get install fuse afpfs-ng
pi@retropie ~ $ sudo usermod -aG fuse pi
# Log out and back in
pi@retropie ~ $ sudo chown root:fuse /dev/fuse
pi@retropie ~ $ sudo chmod 660 /dev/fuse
pi@retropie ~ $ sudo mkdir -p /mnt/TimeCapsule/Data
pi@retropie ~ $ sudo chown -R pi:pi /mnt/TimeCapsule/
pi@retropie ~ $ mount_afp afp://"Ryan Patterson":"REDACTED"@ /mnt/TimeCapsule/Data

A note for anyone following along here: I was getting an obnoxious error that was quite difficult to track down. The error was:

Mounting from Data on /mnt/TimeCapsule/Data
Unmounting volume Data from /mnt/TimeCapsule/Data
Unknown error 1, 1.

This error is very uninformative, but I was able to use strace to find out that the problem was related to not having the fusermount program installed. This is provided by the fuse package, so make sure to install that.

Trying again with CIFS

I spent a lot of time trying to set up the connection using AFP, but the daemon kept crashing, so I decided to switch to CIFS, which turned out to be much easier to set up as well.

pi@retropie ~ $ sudo service samba stop
pi@retropie ~ $ sudo update-rc.d samba disable
# These two disable the samba sharing service. This isn't necessary, but I
# am not going to use it.
pi@retropie ~ $ sudo mount -v -t cifs // /mnt/TimeCapsule/Data -o user="Ryan Patterson",pass="REDACTED",file_mode=0644,dir_mode=0755,sec=ntlm,uid=1000,gid=1000,noserverino

I also needed to add a line to my /etc/fstab so that the share would be mounted on reboot (\040 is how you escape a space in fstab):

// /mnt/TimeCapsule/Data cifs user=Ryan\040Patterson,pass=REDACTED,file_mode=0644,dir_mode=0755,sec=ntlm,uid=1000,gid=1000,noserverino 0 0

Using the scraper with CIFS

I had a collection of ROMs and wanted to get the titles / box art into EmulationStation. However, EmulationStation's built-in scraper is ludicrously slow and only scrapes one game at a time, so I searched around and found a faster program that lives in the RetroPie "experimental" section, called Sselph's Scraper. It has a problem where it doesn't follow symlinks, though, which means that my symlinked ROMs folder isn't scanned. The workaround is as simple as adding a trailing slash to the pathname and running the scraper manually.

/opt/retropie/supplementary/scraper/scraper -image_dir /home/pi/.emulationstation/downloaded_images/snes -image_path /home/pi/.emulationstation/downloaded_images/snes -output_file /home/pi/.emulationstation/gamelists/snes/gamelist.xml -rom_dir /home/pi/RetroPie/roms/snes/ -workers 4 -thumb_only -skip_check

Future ideas

While I was working on this project, I kept a list of the things that could be better. Maybe this will give you an idea for how to improve RetroPie.

  • I'd like to fix the wired Xbox 360 controller's blinking LED. Looks like the kernel module (xpad) does not have LED support compiled in?
  • I want a power switch for the Pi so I don't have to unplug the device to turn it off. Ideally, I'd like to find something that allows wireless Xbox 360 controllers to power on the device using the button.
  • The libretro GUI (rgui) is rather primitive, and doesn't seem to be able to save configuration changes in the latest release.
  • I would like to automatically restore my save state when I power on an emulator.
  • I'd like to upgrade to a Raspberry Pi 2 and get N64 emulation running.
  • The bootup time is pretty high. What could be shaved out to speed it up?