Shotwell not copying when importing

I use Shotwell for managing my photos and generally it works really well as a replacement for f-spot on my Ubuntu system. My workflow is to put photos in a folder ready to import, import them into Shotwell, and then empty the folder. I discovered very early on that it is important to copy photos rather than accept the default of linking to them; otherwise they disappear from Shotwell after the input folder is cleared.

Unfortunately, after an upgrade, Shotwell stopped presenting the option of copy vs link and I imported anyway. Big mistake ;-). It turns out to be important to locate the input folder (if using a workflow of add to input -> import -> clear input) outside of the folder Shotwell stores its photos under. E.g. if Shotwell is putting its images under ~/Pictures then don’t put your input folder anywhere like ~/Pictures/images to import.

Fixing the problem was a bit complicated because deleting the imported photos from Shotwell, relocating the input folder so it was outside the Shotwell storage path, and reimporting the images didn’t actually copy them even when the copy option was selected. It seems that Shotwell treats the image as already having been imported and so only creates a link. Looking at the source code (yay for open source) gave some useful hints about how Shotwell was actually working.

The solution was to delete all records from the PhotoTable table in the ~/.local/share/shotwell/data/photo.db SQLite database where the filename field started with the poorly-located input folder. Obviously I copied the database file first ;-). E.g.

DELETE FROM PhotoTable WHERE filename LIKE '/home/username/Pictures/0 images to import/%';

Reimporting worked as it should have after this step. “DB Browser for SQLite” is brilliant, BTW.

Shifting my git repo up a folder

My SOFA Statistics git repo was in a location that made sense at the time but became increasingly annoying. And I needed to restructure things anyway to prepare for snap packaging. Time for a shift.

Step 1 – shift existing .git folder and .gitignore
Step 2 – shift all the folders and content
Step 3 – git mv to relocate all folders and files relative to the new .git location so git recognises that the files are the same but relocated, i.e. it will keep all the history

Dual Boot Ubuntu 16.04 on Win 10 Acer Aspire E15

The goal – to turn a Windows 10 laptop into a dual boot which retains Windows but makes Ubuntu 16.04 the primary OS. Work being performed on parents-in-law’s new laptop.

The solution overview:

  1. Make recovery disk (on USB)
  2. Shrink Windows partition from within Windows 10
  3. Downgrade BIOS
  4. Set supervisor password on BIOS
  5. Disable Secure Boot (but not UEFI)
  6. Make successful 64-bit, EFI-friendly USB
  7. Install Ubuntu alongside Windows 10
  8. Add Ubuntu efi files and change boot order so Ubuntu grub efi comes first
  9. Re-enable Secure Boot, enable F12 Boot Menu, remove supervisor password
  10. Set up Qualcomm Atheros QCA9377 Wireless Network Adapter.
  11. Misc

Make recovery disk (on USB)

Just in case. You’ll need over 16GB, so a 32GB USB stick should be right. Format it as FAT32.

Open Windows, click on Windows button on screen (bottom left), All apps > Acer > Acer Recovery Management > Backup > Create Factory Default Backup. Tick “Back up system files to the recovery drive.”

Wait a long time while the system backup is prepared.

Shrink Windows partition from within Windows 10

From Windows button on screen > All apps > Windows Administrative Tools > Computer Management > Storage > Disk Management. Right-click on main NTFS partition and select Shrink. The empty space left over will be used by Ubuntu later.

Downgrade BIOS

This really matters. Without it I got a black screen after getting the USB startup disk to present the GRUB option. No amount of mucking around with nomodeset or noapic or noacpi helped. It was the BIOS!

I’ve repeated the content below in case the link ever disappears:

I figured out how to downgrade the bios.

Go to:

Search by Product Model:

Aspire E5-573G (EDIT – or whatever you have)

Select the right OS and download a bios. In my case I downloaded 1.15.

Run the ZRT_115.exe.

It will fail.

But before you close the installer, go to C:\Users\name\AppData\Local\Temp\

Search for a folder (random letters).tmp

There should be a H2OFFT-W.exe and zrt.rd file in there.

Just copy this folder and close the failing install.

In that copied folder, edit the platform.ini file.


Change the VersionFormat value so it reads ‘XN.DD’ instead of ‘XN.NN’.

This will make the installer ignore the fact that 1.25 -> 1.15 is a downgrade.

Prepare for a reboot, i.e. close unnecessary applications, because the reboot will happen automatically after running the installer.

Run H2OFFT-W.exe.
Upon reboot, you’ll see a bios installing progress bar.

After that is done, press F2 during startup to get to bios. The version should now be 1.15.

At this point I set a password, turned off UEFI, and swapped my hard drive out for a fresh SSD. Ubuntu finally installed.

Set supervisor password on BIOS

The laptop has InsydeH20 BIOS Rev. 5.0 – access this by pressing F2 quickly after bootup. Then move to Security > Set Supervisor Password e.g. ‘a’ (we’ll be removing this later so a short password for convenience makes sense in this use case).

It is necessary to set this password (on Acer machines at least) to alter UEFI settings.

Disable Secure Boot (but not UEFI)

In the BIOS move to the Boot section and disable Secure Boot.

Don’t disable UEFI – that may have been necessary a few years ago but it probably causes more problems than it solves now (assuming it works at all).

Make successful 64-bit, EFI-friendly USB

Download 64-bit ISO image. 32-bit apparently won’t work with UEFI.

I had trouble with unetbootin and the Ubuntu Startup Disk Creator. So I used good old dd to make my startup USB drive. This is probably what solved the “Missing operating system” problem I was having.

Format USB as FAT32 (maybe using Gparted or the general Gnome disk utility Disks).

sudo dd if="/home/g/Downloads/ubuntu-16.04-beta2-desktop-amd64.iso" of=/dev/sdd

Note — the device won’t necessarily be sdd. Open Disks, select the USB disk, and look at the Device setting e.g. /dev/sdc1. Use the whole device (/dev/sdc in that case) rather than the partition as the dd target.

Install Ubuntu alongside Windows 10

In BIOS under Boot change boot order so USB HDD (that’s the USB stick actually) comes above Windows Boot Manager. Insert startup USB and reboot. Choose to install Ubuntu alongside Windows. I needed to do this with an ethernet cable plugged in given wireless wasn’t working for Ubuntu out of the box.

Add Ubuntu efi files and change boot order so Ubuntu grub efi comes first

In the BIOS, Security > Select an UEFI file as trusted for executing. Approve all ubuntu efi files. HDD0 > > > all the .efi files. I name them ubuntuorignameefi so they are easy to identify correctly and reorder in the Boot priority order section e.g. grubx64.efi -> ubuntugrubx64efi. Then in Boot > Boot priority order raise the grubx64 entry to the top.

I had 4 files to set (unlike 3 in some docs I found) – namely: grubx64.efi, fwupx64.efi, shimx64.efi, and MokManager.efi.

Re-enable Secure Boot, enable F12 Boot Menu, remove supervisor password

All straightforward. Remove the supervisor password by setting it to an empty string: enter the original password, then Enter (to register the current password), Enter (to submit the empty string as the new password), and Enter again to confirm.

Set up Qualcomm Atheros QCA9377 Wireless Network Adapter.

Identify the wireless hardware first (e.g. with lspci):


03:00.0 Network controller: Qualcomm Atheros Device 0042 (rev 30)

Installing the required driver was explained by @chili555. As another grateful person said, “You rock!”.

Open a terminal and do:

sudo mkdir /lib/firmware/ath10k/QCA9377/
sudo mkdir /lib/firmware/ath10k/QCA9377/hw1.0

If it reports that the file already exists, that’s fine, just continue.

With a temporary working internet connection:


sudo apt-get install git
git clone
cd ath10k-firmware/QCA9377/hw1.0
sudo cp board.bin /lib/firmware/ath10k/QCA9377/hw1.0
sudo cp firmware-5.bin_WLAN.TF.1.0-00267-1 /lib/firmware/ath10k/QCA9377/hw1.0/firmware-5.bin
sudo modprobe -r ath10k_pci
sudo modprobe ath10k_pci

Your wireless should be working; if not, try a reboot.


I installed the system load indicator (set to traditional colours with Processor, Network, and Harddisk), VLC, and Shotwell (loading all photos with copy as the option), and brought Thunderbird and Firefox data over from the old computer (merely copying the xxxxxxx.default folders and updating the profiles.ini Profile > Path settings). Then it was just a matter of setting up icons on the launcher and we’re done!

Launchpad – Bazaar to Git

I’ve stored my SOFA Statistics code on launchpad since 2009 and used bazaar (bzr) to do it. But a lot has changed since then and I now use git on a daily basis in my job. So I’d much rather use git for SOFA. Fortunately that is now possible on launchpad.

I found a guide useful, apart from its migration instructions, which didn’t work for me. For example, I had no luck with sudo apt-get install bzr-fastimport. Instead I found the following worked:

Need ~/.bazaar/plugins

If plugins folder not there, cd ~/.bazaar; mkdir plugins

cd ~/.bazaar/plugins

bzr branch lp:bzr-fastimport fastimport

cd ~/projects/SOFA/sofastatistics/sofa.repo/sofa.main/

git init

bzr fast-export --plain . | git fast-import

gitk --all

YES! It’s all there

Archive .bzr just in case.

USER is launchpad-p-s in my case (yes – a strange choice which made sense at the time)

PROJECT is sofastatistics

So, as per the guide, I added the following to ~/.gitconfig:

[url "git+ssh://"]
insteadof = lp:

Note – if not using lp: etc I had trouble with my ssh key – possibly something to do with confusion between user g and user launchpad-p-s.

I own my own project so to implement git remote add origin lp:PROJECT I ran:

git remote add origin lp:sofastatistics

Note: would only work if insteadof setting added to ~/.gitconfig as described earlier

Otherwise I would have to git remote add origin git+ssh://

Confirmed by making a check folder then cloning the code in: git clone git://

Canon MG7160 on Ubuntu broken by update – solution

Another problem with my parents-in-law’s printer – it looks like an update broke the printer on two separate machines (an ancient desktop running Lubuntu 14.04 and an Asus Eee PC netbook running Ubuntu 14.04). Yet when I took the printer home it instantly worked with the default drivers on my Ubuntu 15.10 system, albeit with the colours a bit off. I fixed this by downloading and unpacking the Canon driver. Then, from the terminal prompt, I ran the ./ install script, answered some questions (e.g. what I wanted the printer called), and voila – I had a working printer with the correct colours using Driver: CNMG7100.PPD.

Some things to note about setting up the printer:

1) the touchscreen on the top of the printer lets you swipe left and right to see more icons. The WiFi icon is where you can tell your printer the SSID and password/passphrase for your secured WiFi access point.

2) another setting lets you change the name of the printer e.g. to something user-friendly like MG7160.

3) on some routers you will be able to see the printer in the list of connected devices (the admin page address differed between my router and my parents-in-law’s). Having given the printer a useful name in step 2) made it easier to spot. I could see it on my own router but not on my parents-in-law’s for some reason, yet there did seem to be some connection: after I renamed the printer it was detected on the PC as a network printer under the new name, so something was getting across.

BTW the error messages I was getting when trying to run the printer on my parents-in-law’s computers were inconsistent. More on that when I update this post from their place.

Messed up Thunderbird folders – sharks circling

We’ve all done it – messed up something so badly while trying to do something clever that we’d sell our first-born just to get back to where we were. And if we succeed at merely restoring the status quo, we’re pitifully grateful.

It all started with trying to find some lost photos that Shotwell couldn’t find the originals of. Presumably they had been linked to only and then the originals deleted leaving only the reference and the thumbnail behind. So here was the plan:

  1. Identify all email attachments which are images
  2. See if any of them have the same name as the missing images
  3. Open email based on date and sender to recover image

The good news is the plan worked for lots of the missing photos. Thanks to Python3, import mailbox, and import email. The bad news was when I opened Thunderbird the next day. The folder I had been working on was missing. So in addition to my missing photos I also had seemingly lost 1.8GB of emails.

tl;dr: 1) Close TB; 2) rename the missing folder in your file system and delete the .msf version; 3) open TB; 4) close TB and restore the original name; 5) open TB – success!

Now back to the original problem. But I should probably run a full backup first.

Installing wireless USB modem driver on Ubuntu 15.10

It really was this simple:

How to install D-Link DWA-182 Wireless AC1200 Dual Band USB Adapter on Linux Ubuntu

Download zip from here:

rtl8812AU_8821AU_linux on GitHub

Unzip the folder e.g. as "/home/mythbuntu/rtl8812AU_8821AU_linux-master"

cd "/home/mythbuntu/rtl8812AU_8821AU_linux-master"
make clean
sudo make uninstall
sudo make install

Restart and enable wireless and enter password when prompted (after selecting your own ssid).

Trouble copying audio file – until rdd-copy

I had trouble copying an audio file (WAV) from a CD to my computer – the copy process always got stuck at exactly the same point, 153MB in. And it didn’t matter whether I was using sound-juicer or nautilus. The answer was to install rdd (sudo apt-get install rdd). rdd copes with read errors by supplying blanks (assuming multiple careful attempts to read the data have all failed) rather than halting.

The required command was rdd-copy src dest. But what to supply as src? I tried /media and similar but had no luck. The final answer was ‘/run/user/1000/gvfs/cdda:host=sr0/Track 3.wav’. But how to find it? Just drag the file from nautilus to the terminal and see what is displayed there. The following worked, even though it took a long time to get the file:

rdd-copy '/run/user/1000/gvfs/cdda:host=sr0/Track 3.wav' /home/g/Desktop/track3.wav

Simple flask app on heroku – all steps (almost)

Note – instructions assume Ubuntu Linux.

See Getting Started with Python on Heroku (Flask) for the official instructions. The instructions below tackle things differently and include redis-specific steps.

I don’t need postgresql for my app even though it’s needed for the heroku demo app. I’m using redis as a simple key-value store.

The main reason for each step is indicated in bold at the start. There are lots of steps, but there are lots of things being achieved, and each purpose only requires a few steps, so it is probably hard to streamline any further.

    >> sudo apt-get install python3 python3-pip python-virtualenv git ruby redis-server redis-tools
    Get free heroku account
    Install the heroku toolbelt. It sets up virtualenvwrapper for you too (one less thing to figure out)
    Do the once-ever authentication
    >> heroku login
  5. APP
    Make project folder e.g.
    >> mkdir ~/projects/myproj
  6. APP
    >> cd ~/projects/myproj
    >> echo "web: python" > Procfile
    >> git init
    >> mkvirtualenv sticky

    So requirements for a specific project can be separated from other projects – this lets heroku identify the actual requirements. Normally “workon sticky” thereafter; deactivate to exit the virtualenv

  10. APP
    >> pip3 install flask
    Note – installed within virtualenv
  11. HEROKU
    Save the following as requirements.txt – needed by heroku so it knows the dependencies. Update the version of redis as appropriate. gunicorn is a better approach than the flask test server
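The exact contents weren’t preserved in this post, but for this stack requirements.txt would look something like the following (the version numbers are illustrative – pin whatever you actually installed):

```
flask==0.10.1
gunicorn==19.3.0
redis==2.10.3
```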
  12. HEROKU
    So we can use Python 3.4 instead of the current default of 2.7:
    >> echo "python-3.4.3" > runtime.txt
  13. APP & HEROKU

    Make a toy app to get started from.

    Note – modify the standard demo flask app to add a port to ease eventual heroku deployment. Otherwise the app will fail because of a problem with the port when running

    heroku ps:scale web=1

    Starting process with command `python`
    Web process failed to bind to $PORT within 60 seconds of launch

    Here is an example (may need updating if flask changes):

    import os
    from flask import Flask

    app = Flask(__name__)

    @app.route("/")
    def hello():
        return "Hello World!"

    if __name__ == "__main__":
        port = int(os.environ.get("PORT", 33507))"", port=port)

    >> deactivate
  15. Make a module to make it easier to work with redis – let’s call it

    import os
    import urllib.parse

    import redis

    # assumed fallback: a local redis when REDISTOGO_URL is not set
    url = urllib.parse.urlparse(
        os.environ.get('REDISTOGO_URL', 'redis://localhost:6379'))
    redis = redis.Redis(host=url.hostname, port=url.port, db=0,
                        password=url.password)

    We can then use redis like this:
    from store import redis

  16. APP
    Keep building the app locally. The following is good for redis: Redis docs. And Flask’s docs are always good: Flask Docs – Minimal Application

    Before deploying to production:

    1. Update git otherwise you’ll be deploying old code – heroku uses git for deployment
    2. set app.debug to False (although no rush when just getting started and not expecting the app to get hit much)
    3. probably switch to gunicorn sooner or later (will need to change the Procfile to
      web: gunicorn main:app --workers $WEB_CONCURRENCY
    4. Example nginx.conf:

      # As long as /etc/nginx/sites-enable/ points to
      # this conf file nginx can use it to work with
      # the server_name defined (the name of the file
      # doesn't matter - only the server_name setting)
      # sudo ln -s /home/vagrant/src/nginx.conf ...
      #     ... /etc/nginx/sites-enabled/
      # Confirm this link is correct
      # e.g. less /etc/nginx/sites-enabled/

      server {
          listen 80;
          server_name localhost;

          location /static { # static content is
              # handled directly by NGINX which means
              # the nginx user (www-data) will need
              # read permissions to this folder
              root /home/vagrant/src;
          }

          location / { # all else passed to Gunicorn to handle
              # Pass to wherever I bind Gunicorn to serve to
              # Only gunicorn needs rights to read, write,
              # and execute scripts in the app folders
              proxy_pass; # assumed – match the gunicorn.conf bind
          }
      }
    5. Example gunicorn.conf
      import multiprocessing

      bind = ""  # assumed address – ensure nginx proxy_pass points at this port
      logfile = "/home/vagrant/gunicorn.log"
      workers = multiprocessing.cpu_count() * 2 + 1

  18. HEROKU
    >> heroku create

    Should now be able to browse to the URL supplied in the stdout from the command. Note – not working yet – still need to deploy to the new app

    >> git push heroku master

    Must then actually spin up the app:

    >> heroku ps:scale web=1

    A shortcut for opening the app is:

    >> heroku open

  19. HEROKU
    Add redis support (after the first deployment – otherwise you get:

    ! No app specified.
    ! Run this command from an app folder or specify which app to use with --app APP.
    >> heroku addons:create redistogo

    Note – need to register a credit card to use any add-ons, even the free ones.

Some other points: when developing on a different machine, I needed to supply my public key to heroku from that other machine (otherwise: Permission denied (publickey) when deploying heroku code. fatal: The remote end hung up unexpectedly).

heroku keys:add ~/.ssh/

And the full sequence for upgrading your app after the prerequisites have been fulfilled is:

  1. git commit to local repo
  2. Then git push to heroku
  3. Then run heroku ps:scale web=1 again

And I had a problem with redis when I switched from Python 2 to 3 – my heroku push wouldn’t work. By looking at the logs (>> heroku logs --tail) I found that import imap wouldn’t work, and searching on that generally showed I needed a newer version of redis than the one I had foolishly specified in requirements.txt.

F-spot vanished in Ubuntu 15.04 (Vivid)

F-spot has been removed from Ubuntu Vivid (15.04).

Dependency is not satisfiable: liblcms1 (>= 1.15-1)

None of the data for f-spot was gone, just the ability to run the application – probably something to do with mono library deprecation.

~$ find / -name f-spot 2>/dev/null

1.3MB in /home/g/.config/f-spot/photos.db

Anyway, I opened Shotwell, chose “Import from Application” then “Import media from: F-Spot”, and moved on.