Durable photo workflow

Ever since my kids were born I have accumulated thousands of digital happy-snaps, and I have finally reached a point where I'm quite happy with my workflow. I have always been extremely dubious of using any sort of external all-in-one solution for managing my photos; so many things shut down, cease development or disappear, leaving you to figure out how to migrate to the next latest thing (e.g. Picasa shutting down). So while there is nothing complicated or even generic about them, there are a few things in my photo-scripts repo that might help others who like to keep a self-contained archive.

Firstly, I have a simple script to copy the latest photos from the SD card (i.e. those new since the last copy -- this is obviously very camera-specific). I then split by date, so I have a simple flat directory layout with each week's photos in it. With the price of SD cards and my rate of filling them up, I don't even bother wiping them at this point, but just keep them in the safe as a backup.
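My script is very camera-specific, but the filing step amounts to something like this (the paths and directory naming here are hypothetical):

# file each photo into a per-week directory based on its timestamp
# (paths and naming scheme are illustrative only)
for f in /media/sdcard/DCIM/*.JPG; do
    week=$(date -r "$f" +%G-%V)       # ISO year and week number
    mkdir -p ~/photos/"$week"
    cp -n "$f" ~/photos/"$week"/      # -n: never overwrite existing copies
done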

For some reason I have a bit of a thing about geotagging all the photos so I know where I took them. Certainly some cameras do this today, but mine does not. I have a two-pronged approach: a geotag script, and a small website, easygeotag.info, which quickly lets me translate a point on Google Maps to exiv2 command-line syntax. Since I take a lot of photos in the same places, points can be stored by name in a small file sourced by the script.
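The generated commands look something like the following (these particular coordinates are invented; EXIF stores positions as degree/minute/second rationals):

# write GPS position into the EXIF data; exiv2 -M takes
# "set key value" modify commands
exiv2 \
  -M "set Exif.GPSInfo.GPSLatitudeRef S" \
  -M "set Exif.GPSInfo.GPSLatitude 33/1 52/1 0/1" \
  -M "set Exif.GPSInfo.GPSLongitudeRef E" \
  -M "set Exif.GPSInfo.GPSLongitude 151/1 12/1 0/1" \
  photo.jpg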

Adding comments to the photos is done with perhaps the lesser-known cousin of EXIF -- IPTC. Some time ago I wrote Python bindings for libiptcdata, and it has been working just fine ever since. Debian's python-iptcdata comes with an inbuilt script to set title and caption, which is easily wrapped.

What I like about this is that my photos are in a simple directory layout, with all metadata embedded within the actual image files in very standardised formats that should be readable anywhere I choose to host them.

For sharing, I then upload to Flickr. I used to have a command-line script for this, but have found the web uploader works even better these days. It reads the IPTC data for titles and comments, and picks up the geotag info for nice map displays. I manually corral them into albums, and the Flickr "guest pass" is perfect for sharing albums with friends and family without making them jump through hoops to register on a site to get access to the photos or, worse, hosting them myself. I consider Flickr a cache, because (even though I pay) I expect it to shut down or turn evil at any time. Interestingly, their AI tagging is often quite accurate, and I imagine will only get better. This is nice extra metadata that you don't have to spend time on yourself.

The last piece has always been the "hit by a bus" component of all this: could anyone figure out how to access all these photos if I suddenly disappeared? I've tried many things here -- at one point I was using rdiff-backup to sync encrypted bundles up to AWS, for example; but the problem with that became very clear when I forgot to keep the key safe and couldn't decrypt any of my backups (let alone anyone else figuring all this out).

Finally, Google Nearline seems to be just what I want. It's off-site, redundant and the price is right; but more importantly I can very easily give access to the backup bucket to anyone with a Google address, who can then just hit a website to download the originals from the bucket (I left the link with my other "hit by a bus" bits and pieces). Of course, what they then do with this data is their problem, but at least I feel like they have a chance. The client even has an rsync-like interface, so I can quickly upload the new stuff from my home NAS (where I keep the photos in a RAID0).
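With gsutil, the upload step ends up as something like this (the bucket name is made up):

# mirror the photo tree from the NAS into the Nearline bucket
gsutil rsync -r /nas/photos gs://my-photo-archive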

I've been doing this now for 350 weeks and have worked through some 25,000 photos. I used to get an album up every week, but as the kids get older and we're closer to family I now do it in batches about once a month. I do wonder if my kids will ever be interested in tagged and commented photos with pretty much their exact location from their childhood ... I doubt it, but it's nice to feel like I have a good chance of still having them if they do.

Australia, ipv6 and dd-wrt

It seems that, other than Internode, no Australian ISP has any details at all about native IPv6 deployment. Locally I am on Optus HFC, which I believe has been sold to the NBN, who I believe have since discovered that it is not quite what they thought it was; i.e. I think they have bigger problems than rolling out IPv6, and I won't hold my breath.

So the only other option is to use a tunnel of some sort, and for local presence it seems there is really only one option: SixXS. There are other providers, notably HE.net, but they do not have Australian tunnel-servers; SixXS was the only one I could find with a tunnel in Sydney.

So first sign up for an account there. The process was rather painless and my tunnel was provided quickly.

After getting this, I got dd-wrt configured and working on my Netgear WNDR3700 V4. Here's my terse guide, cobbled together from other bits and pieces I found. I'm presuming you have a recent dd-wrt build that includes the aiccu tool to create the tunnel, and are pretty familiar with logging into it, etc.

Firstly, on dd-wrt make sure you have JFFS2 turned on, to give yourself somewhere to install scripts: Administration, JFFS2 Support, Internal Flash Storage, Enabled.
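If you want to check it took effect, a shell on the router should show a jffs2 mount on /jffs with something like:

mount | grep jffs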

Next, add the aiccu config file at /jffs/etc/aiccu.conf:

# AICCU Configuration

# Login information
username USERNAME
password PASSWORD

# Protocol and server listed on your tunnel
protocol tic
server tic.sixxs.net

# Interface names to use
ipv6_interface sixxs

# The tunnel_id to use
# (only required when there are multiple tunnels in the list)
#tunnel_id <your tunnel id>

# Be verbose?
verbose false

# Daemonize?
daemonize true

# Require TLS?
requiretls true

# Set default route?
defaultroute true

Now you can add a script to bring up the tunnel and interface at /jffs/config/sixxs.ipup (make sure you make it executable), replacing your tunnel address in the ip commands:

#!/bin/sh

# wait until time is synced
while [ `date +%Y` -eq 1970 ]; do
    sleep 5
done

# check if aiccu is already running
if [ -n "`ps|grep etc/aiccu|grep -v grep`" ]; then
    aiccu stop
    sleep 1
    killall aiccu
fi

# start aiccu
sleep 3
aiccu start /jffs/etc/aiccu.conf

sleep 3
ip -6 addr add 2001:....:....:....::/64 dev br0
ip -6 route add 2001:....:....:....::/64 dev br0

sleep 5

#### BEGIN FIREWALL RULES ####
WAN_IF=sixxs
LAN_IF=br0

#flush tables
ip6tables -F

#define policy
ip6tables -P INPUT DROP
ip6tables -P FORWARD DROP
ip6tables -P OUTPUT ACCEPT

# Input to the router
# Allow all loopback traffic
ip6tables -A INPUT -i lo -j ACCEPT

#Allow unrestricted access on internal network
ip6tables -A INPUT -i $LAN_IF -j ACCEPT

#Allow traffic related to outgoing connections
ip6tables -A INPUT -i $WAN_IF -m state --state RELATED,ESTABLISHED -j ACCEPT

# for multicast ping replies from link-local addresses (these don't have an
# associated connection and would otherwise be marked INVALID)
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-reply -s fe80::/10 -j ACCEPT

# Allow some useful ICMPv6 messages
ip6tables -A INPUT -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-request -j ACCEPT
ip6tables -A INPUT -p icmpv6 --icmpv6-type echo-reply -j ACCEPT

# Forwarding through from the internal network
# Allow unrestricted access out from the internal network
ip6tables -A FORWARD -i $LAN_IF -j ACCEPT

# Allow some useful ICMPv6 messages
ip6tables -A FORWARD -p icmpv6 --icmpv6-type destination-unreachable -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type packet-too-big -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type time-exceeded -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type parameter-problem -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type echo-request -j ACCEPT
ip6tables -A FORWARD -p icmpv6 --icmpv6-type echo-reply -j ACCEPT

#Allow traffic related to outgoing connections
ip6tables -A FORWARD -i $WAN_IF -m state --state RELATED,ESTABLISHED -j ACCEPT

Now you can reboot, or run the script, and it should bring the tunnel up; you should be correctly firewalled such that packets can get out, but nobody can get in.

Back in the web-interface, you can now enable IPv6 with Setup, IPV6, Enable. Leave "IPv6 Type" as Native IPv6 from ISP. Then I enabled Radvd and added a custom config in the text-box to get DNS working on hosts, using Google DNS:

interface br0
{
    AdvSendAdvert on;
    prefix 2001:....:....:....::/64
    {
    };
    RDNSS 2001:4860:4860::8888 2001:4860:4860::8844
    {
    };
};

(again, replace the prefix with your own)

That is pretty much it; at this point, you should have an IPv6 network and it's most likely that all your network devices will "just work" with it. I got full scores on the IPv6 test sites on a range of devices.

Unfortunately, even a geographically close tunnel still really kills latency; compare these two traceroutes:

$ mtr -r -c 1 google.com
Start: Fri Jan 15 14:51:18 2016
HOST: jj                          Loss%   Snt   Last   Avg  Best  Wrst StDev
1. |-- 2001:....:....:....::      0.0%     1    1.4   1.4   1.4   1.4   0.0
2. |-- gw-163.syd-01.au.sixxs.ne  0.0%     1   12.0  12.0  12.0  12.0   0.0
3. |-- ausyd01.sixxs.net          0.0%     1   13.5  13.5  13.5  13.5   0.0
4. |-- sixxs.sydn01.occaid.net    0.0%     1   13.7  13.7  13.7  13.7   0.0
5. |-- 15169.syd.equinix.com      0.0%     1   11.5  11.5  11.5  11.5   0.0
6. |-- 2001:4860::1:0:8613        0.0%     1   14.1  14.1  14.1  14.1   0.0
7. |-- 2001:4860::8:0:79a0        0.0%     1  115.1 115.1 115.1 115.1   0.0
8. |-- 2001:4860::8:0:8877        0.0%     1  183.6 183.6 183.6 183.6   0.0
9. |-- 2001:4860::1:0:66d6        0.0%     1  196.6 196.6 196.6 196.6   0.0
10.|-- 2001:4860:0:1::72d         0.0%     1  189.7 189.7 189.7 189.7   0.0
11.|-- kul01s07-in-x09.1e100.net  0.0%     1  194.9 194.9 194.9 194.9   0.0

$ mtr -4 -r -c 1 google.com
Start: Fri Jan 15 14:51:46 2016
HOST: jj                          Loss%   Snt   Last   Avg  Best  Wrst StDev
1.|-- gateway                    0.0%     1    1.3   1.3   1.3   1.3   0.0
2.|-- 10.50.0.1                  0.0%     1   11.0  11.0  11.0  11.0   0.0
3.|-- ???                       100.0     1    0.0   0.0   0.0   0.0   0.0
4.|-- ???                       100.0     1    0.0   0.0   0.0   0.0   0.0
5.|-- ???                       100.0     1    0.0   0.0   0.0   0.0   0.0
6.|-- riv4-ge4-1.gw.optusnet.co  0.0%     1   12.1  12.1  12.1  12.1   0.0
7.|-- 198.142.187.20             0.0%     1   10.4  10.4  10.4  10.4   0.0

When you watch what is actually using IPv6 (the ipvfoo plugin for Chrome is pretty cool; it shows you what requests are going where), it's mostly just traffic to really big sites (Google/Google Analytics, Facebook, YouTube, etc.) who have figured out IPv6.

These are exactly the type of places that have made efforts to get caching as close as possible to you (Google's mirror servers are within Optus' network, for example), so you're really shooting yourself in the foot by going around them via an external tunnel. The other thing is that I'm often hitting IPv6 mirrors and downloading larger things for work (distro updates, git clones, image downloads, etc.), which is slower and wastes someone else's bandwidth for really no benefit.

So while it's pretty cool to have an IPv6 address (and a fun experiment) I think I'm going to turn it off. One positive was that after running with it for about a month, nothing has broken -- which suggests that most consumer level gear in a typical house (phones, laptops, TVs, smart-watches, etc) is either ready or ignores it gracefully. Bring on native IPv6!

Platform bootstrap for OpenStack Infrastructure

This is a terse guide on bootstrapping virtual-machine images for OpenStack infrastructure, with the goal of adding continuous-integration support for new platforms. It might also be handy if you are trying to replicate the upstream CI environment.

It covers deployment to Rackspace for testing; Rackspace is one of the major providers of capacity for the OpenStack Infrastructure project, so it makes a good place to start when building up your support story.

Firstly, get an Ubuntu Trusty environment to build the image in (other environments, like CentOS or Fedora, probably work -- but building on Trusty minimises differences from what the upstream automated machinery does). You want a fair bit of memory, and plenty of disk-space.

The tool used for building virtual-machine images is diskimage-builder. In short, it takes a series of elements, which are really just scripts run in different phases of the build process.
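For a rough idea of the layout (the element name here is made up; the phase names are diskimage-builder's), an element is just a directory of hook scripts, run at the phase matching the directory they sit in:

my-element/
├── element-deps     # list of other elements this one depends on
├── root.d/          # populate the initial chroot
├── install.d/       # runs inside the chroot
├── post-install.d/  # cleanup, still inside the chroot
└── finalise.d/      # last tweaks before the image is captured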

I'll describe building a Fedora image, since that's been my focus lately. We will use the fedora-minimal element -- this means the system is bootstrapped from nothing inside a chroot environment before eventually being turned into a virtual-machine image (contrast this with the fedora element, which bases itself on the qcow2 images provided by the official Fedora cloud project). Thus you'll need a few things installed on the Ubuntu host to deal with bootstrapping a Fedora chroot:

apt-get install yum yum-utils python-lzma

You will hit stuff like that python-lzma dependency on this road less travelled -- technically it is a bug that the yum packages on Ubuntu don't depend on it; without it you will get strange yum errors about unsupported compression.

At this point, you can bootstrap your diskimage-builder environment. You probably want diskimage-builder from git, and then build up a virtualenv for your support bits and pieces.

git clone git://git.openstack.org/openstack/diskimage-builder
virtualenv dib_env
. dib_env/bin/activate

pip install dib-utils

dib-utils is a small part of diskimage-builder that has been split out; don't think too much about it.

While diskimage-builder is responsible for the creation of the basic image, there are a number of elements provided by the OpenStack project-config repository that bootstrap the OpenStack environment. This does a lot of stuff, including caching all git trees (so CI jobs aren't cloning everything constantly) and running puppet setup.

git clone git://git.openstack.org/openstack-infra/project-config

There's one more trick required for building the VHD images that Rackspace needs: make sure you install the patched vhd-util as described in the script help.

At this point, you can probably build an image. Here's something approximating what you will want to do:

break=after-error \
TMP_DIR=~/tmp \
ELEMENTS_PATH=~/project-config/nodepool/elements \
DIB_DEV_USER_PASSWORD="password"  DIB_DEV_USER_PWDLESS_SUDO=1 \
DISTRO=23 \
./bin/disk-image-create -x --no-tmpfs -t vhd \
   fedora-minimal vm devuser simple-init \
   openstack-repos puppet nodepool-base node-devstack

To break some of this down:

  • break= will help you debug failing builds by dropping you into a shell when one of the element parts fails.

  • TMP_DIR should be set to somewhere with plenty of space; a default /tmp with tmpfs is probably restricted to a couple of gigabytes, which is not enough for a build.

  • ELEMENTS_PATH will add the OpenStack-specific elements.

  • DIB_DEV_USER_* flags will create a devuser login with a password and sudo access. This is really important, as it is most likely your first boot will be fairly broken and you'll need a way to log in (this is not used in "production").

  • DISTRO in this case says to build a Fedora 23 based image, but will vary depending on what you are trying to do.

  • disk-image-create finally creates the image. We are telling it to create a VHD-based image, using fedora-minimal in this case. For configuration we are using simple-init, which uses the glean project to configure networking from a configuration drive.

    • By default, this will install glean from pypi. However, adding

      ``DIB_INSTALL_TYPE_simple_init=repo``
      

      would modify the install to use the git source. This is handy if you have changes in there that are not released to pypi yet.

This goes and does its thing; it will take about 20 minutes. Depending on how far your platform diverges from the existing support, it may require a lot of work to get everything going so an image comes out the other side. To see a rough example of what should be happening, see the logs of the official image builds that happen for a variety of platforms.

At some point, you should get a file image.vhd which is ready to be deployed. The only reasonable way to do this is with shade, which you can quickly grab into the virtualenv we created before:

pip install shade

Now you'll need to set up a clouds.yaml file to give yourself the permissions to upload the image. It should look something like:

clouds:
  rax:
    profile: rackspace
    auth:
      username: your_rax_username
      password: your_rax_password
      project_id: your_rax_accountnum
    regions:
      - IAD

You should know your username and password (whatever you log into the website with), and when you log in to Rackspace your project_id value is listed in the drop-down box labelled with your username, as "Account #". shade has no UI as such, so a simple script will do the upload:

import shade

shade.simple_logging(debug=True)
cloud = shade.openstack_cloud(cloud='rax')
image = cloud.create_image('image-name', filename='image.vhd', wait=True)

Now wait -- this will also take a while. Even after the upload it takes a fair while to process (you will see the shade debug output looping around some glance calls checking whether it is ready). If everything works, the script should return and you should be able to see the new image in the output of nova image-list.

At this point, you're ready to try booting! One caveat is that the Rackspace web interface does not seem to give you the option to boot with a configuration drive available to the host, which is essential for simple-init to bring up the network. So boot via the API with something like:

nova boot --flavor=2 --image=image-uuid --config-drive 1 test-image

This should build and boot your image! This will allow you to open a whole new door of debugging to get your image booting correctly.

You can now iterate by rebuilding/uploading/booting as required. Note these are pretty big images, uploaded as broken-up swift objects, so I find swift delete --all helpful to reset between builds (obviously, I have nothing else in swift that I want to keep). The Rackspace Java-based console UI is rather annoying; it cuts itself off every time the host reboots, which makes it quite difficult to catch the early boot-up or modify grub options via the boot-loader, etc. You might need to fiddle with timeout options in the image build.

If you've managed to get your image booting and listening on the network, you're a good deal of the way towards having your platform supported in upstream OpenStack CI. At this point, you likely want to peruse the nodepool configuration and get an official build happening there. Once that is up, you can start the process of adding jobs that use your platform to test!

Don't worry, there's plenty more that can go wrong from here -- but you're on the way! OpenStack infra is a very dynamic environment with many changes in progress, so in general #openstack-infra on Freenode is going to be a great place to start looking for help.

Acurite 02032CAUDI Weather Station

I found an Acurite Weather Center 02032CAUDI at Costco for $99, which seemed like a pretty good deal.

It includes the "colour" display panel and a 5-in-1 remote sensor covering temperature, wind-speed and direction, humidity, and rainfall.

The colour in the display is really just a fancy background sticker behind the usual calculator-style liquid-crystal display. For whatever reason the viewing angle is extremely limited; even a little off-centre it becomes very dim. It has an inbuilt backlight that is quite bright; it is either off, on at one of three levels, or in an "auto" mode which dims it to the lowest level at certain hours. Hacking in a proximity sensor might be a fun project. The UI is OK; it shows indoor and outdoor temperature/humidity and wind-speed/rain, and is able to show you highs and lows with a bit of scrolling.

I was mostly interested in its USB output features. After a bit of fiddling, I can confirm I've got it connected to Meteobridge running on a D-Link DIR-505 and reporting to Weather Underground. One caveat is that you do need to plug the weather station into a powered USB hub, rather than directly into the DIR-505; I believe this is because the DIR-505 can only talk directly to USB 2.0 devices and not older 1.5 devices like the weather station. Another small issue is that the Meteobridge license is €65, which is not insignificant. Of course, with some effort you can roll your own, as described in this series, which is fun if you're looking for a project.

Luckily I had a mounting place that backed onto my small server cupboard, so I could easily run the cables through the wall to power and the DIR-505. Without this, the cables might end up a bit of a mess. Combined with the fairly limited viewing angle, finding somewhere practical to put the indoor unit might be one of the hardest problems.

Mounting the outdoor unit was fine, but mine is a little close to the roof-line, so I'm not sure the wind-speed and direction readings are as accurate as if it were completely free-standing (I think the official guidance for wind-speed is something like free-standing, 10m in the air). It needs to face north, both for the wind-direction readings and so the included solar panel, which drives a fan drawing air into the temp/humidity sensor, gets as much sun as possible (it works without the fan, but is more accurate with it). It also needs to be mounted fairly level for the rain gauge; it includes a small bubble-level on top to confirm this. Firstly, you'll probably find that most mount points you thought were straight actually aren't! And since the bubble is on the top, to actually see it you need to be above it (obviously), which may not be possible if you're standing on a ladder and mounting it over your head. This may be a situation that inspires a very legitimate use of a selfie-stick.

It's a fun little device, fairly hackable, and an overall reasonable price; I recommend it.

On VMware and GPL

I do not believe any of the current reporting around the announced case has accurately described the issue, which I see as a much more subtle question of GPL use across API layers. Of course, I don't know what the real issue is, because the case is sealed and I have no inside knowledge. I do have some knowledge of the vmkernel, however, and what I read does not match what I know.

An overview of ESXi is shown below:

[Image: overview of vmkernel and vmkapi]

There is no question that ESXi uses a lot of Linux kernel code and drivers. The question as I see it is more around the interface. The vmkernel provides a well-described API known as vmkapi. You can write drivers directly to this API; indeed some do, and you can download an SDK.

A lot of Linux code has been extracted into vmkLinux; this is a shim between Linux drivers and the vmkapi interface. The intent here is to provide an environment where almost unmodified Linux drivers can interface with the proprietary vmkernel. This means vendors don't have to write two drivers; they can re-use their Linux ones. Of course, large parts of various Linux sub-systems' APIs are embedded in here. But the intent is that this code is modified to communicate with the vmkernel via the exposed vmkapi layer. It is conceivable that you could write a vmkWindows or vmkOpenBSD and essentially provide a shim-wrapper for drivers from other operating systems too.

vmkLinux and all the drivers are GPL, and released as such; I do not think there could be any argument there. But they interface with vmkapi which, as stated, is an available API but part of the proprietary kernel. So, as I see it, this is a much more subtle question than "did VMware copy-paste a bunch of Linux code into their kernel?" It goes to where the GPL crosses API boundaries and what is considered a derived work.

If nothing else, I think the increased clarity around that point from this enforcement would be good for everyone.

Netgear CG3100D-2 investigation

The Netgear CG3100D-2 is the default cable-modem you get for Telstra Cable, or at least it was at one time. Having retired it after changing service providers, I wanted to see if it could be re-purposed.

In short, its hackability is low.

The first thing was to check out the Netgear Open Source page to see if the source had anything interesting. There is some source, but honestly, when you dig into the platform code and see things like this from kernel/linux/arch/mips/bcm963xx/setup.c:

/***************************************************************************
 * C++ New and delete operator functions
 ***************************************************************************/

/* void *operator new(unsigned int sz) */
void *_Znwj(unsigned int sz)
{
    return( kmalloc(sz, GFP_KERNEL) );
}

/* void *operator new[](unsigned int sz) */
void *_Znaj(unsigned int sz)
{
    return( kmalloc(sz, GFP_KERNEL) );
}
...

there's a bit of a red flag that this is not the cleanest code in the world (I guess it interfaces with some sort of cross-platform SDK written in some flavour of C++).

So next we can open it up, where it turns out there are two separate UARTs, as shown in the following image.

[Image: UART connections on Netgear CG3100D 2BPAUS]

One of these is for the bootloader and eCos environment, and the other seems to be connected to the Linux side.

A copy of the boot logs for the bootloader, eCos and Linux doesn't show anything particularly interesting. The Linux boot does identify itself as Linux version 2.6.30-V2.06.05u, while the available source lists its version as 2.6.30-1.0.5.83.mp2, so it's questionable whether the source matches whatever firmware has made it onto the modem.

We do see that it identifies as a BCM338332, which seems to be one of the many sub-models of the BCM3383 SoC cable-modem solution. There is an OpenWrt wiki page that indicates support is limited.

Both Linux and eCos boot to a login prompt, where all the usual default login/password combinations fail. So my next thought was to try to get to the firmware via the bootloader, which has a simple interface:

BCM338332 TP0 346890
Reset Switch - Low GPIO-18 50ms
MemSize:            128 M
Chip ID:     BCM3383G-B0

BootLoader Version: 2.4.0alpha14R6T Pre-release Gnu spiboot dual-flash reduced DDR drive linux
Build Date: Mar 24 2012
Build Time: 14:04:50
SPI flash ID 0x012018, size 16MB, block size 64KB, write buffer 256, flags 0x0
Dual flash detected.  Size is 32MB.
parameter offset is 49944

Signature/PID: a0e8


Image 1 Program Header:
   Signature: a0e8
     Control: 0005
   Major Rev: 0003
   Minor Rev: 0000
  Build Time: 2013/4/18 04:01:11 Z
 File Length: 3098751 bytes
Load Address: 80004000
    Filename: CG3100D_2BPAUS_V2.06.02u_130418.bin
         HCS: 1e83
         CRC: b95f4172

Found image 1 at offset 20000

Image 2 Program Header:
   Signature: a0e8
     Control: 0005
   Major Rev: 0003
   Minor Rev: 0000
  Build Time: 2013/10/17 02:33:29 Z
 File Length: 3098198 bytes
Load Address: 80004000
    Filename: CG3100D_2BPAUS_V2.06.05u_131017.bin
         HCS: 2277
         CRC: a6c0fd23

Found image 2 at offset 800000

Image 3 Program Header:
   Signature: a0e8
     Control: 0105
   Major Rev: 0002
   Minor Rev: 0017
  Build Time: 2013/10/17 02:22:30 Z
 File Length: 8277924 bytes
Load Address: 84010000
    Filename: CG3100D_2BPAUS_K2630V2.06.05u_131017.bin
         HCS: 157e
         CRC: 57bb0175

Found image 3 at offset 1000000

Enter '1', '2', or 'p' within 2 seconds or take default...
. .

Board IP Address  [0.0.0.0]:           192.168.2.10
Board IP Mask     [255.255.255.0]:
Board IP Gateway  [0.0.0.0]:
Board MAC Address [00:10:18:ff:ff:ff]:

Internal/External phy? (e/i/a)[a]
Switch detected: 53125
ProbePhy: Found PHY 0, MDIO on MAC 0, data on MAC 0
Using GMAC0, phy 0

Enet link up: 1G full


Main Menu:
==========
  b) Boot from flash
  g) Download and run from RAM
  d) Download and save to flash
  e) Erase flash sector
  m) Set mode
  s) Store bootloader parameters to flash
  i) Re-init ethernet
  p) Print flash partition map
  r) Read memory
  w) Write memory
  j) Jump to arbitrary address
  X) Erase all of flash except the bootloader
  z) Reset

Flash Partition information:

Name           Size           Offset
=====================================
bootloader   0x00010000     0x00000000
image1       0x007d0000     0x00020000
image2       0x007c0000     0x00800000
linux        0x00800000     0x01000000
linuxapps    0x00600000     0x01800000
permnv       0x00010000     0x00010000
dhtml        0x00200000     0x01e00000
dynnv        0x00040000     0x00fc0000
vennv        0x00010000     0x007f0000

The "read memory" seems to give you one byte at a time and I'm not certain it actually works. So I think the next step is solder some leads to dump out the firmware from the flash-chip directly, which is on the underside of the board. At that point, I imagine the passwords would be easily found in the image and you might then be able to leverage this into some sort of further hackability.

If you want a challenge and have a lot of time on your hands, this might be your platform — but practically I think the best place for this is the recycling bin.

rstdiary

I find it very useful to spend five minutes a day keeping a small log of what was worked on, major bugs or reviews, and a general short status report. It makes rolling up into a bigger status report easier when required, and is handy as a reference before you go into meetings, etc.

I was happily using an etherpad page until I couldn't save any more revisions; the page got too long and started giving JavaScript timeouts. For a replacement, I wanted a single input file with no boilerplate, to aid in back-referencing and adding entries quickly. It should be formatted to be future-proof, as well as being emacs-, makefile- and git-friendly. Output should be web-based so I can refer to it easily and point people at it when required, but it just has to be rsynced to public_html with zero setup.

rstdiary will take a flat RST-based input file and chunk it into some reasonable-looking static HTML that looks something like this. It's split by month with some minimal navigation. Copy the output directory somewhere and it is done.

It might also serve as a small example of parsing and converting RST nodes, which is where it does the chunking; unfortunately the official documentation on that is "to be completed", and I couldn't find anything like a canonical example, so I gathered what I could from looking at the source of the transformation code. As the license says, the software is provided "as is" without warranty!
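The core of that sort of node-walking with docutils looks something like this minimal sketch (not rstdiary's actual code; the file name is made up):

# parse an RST file into a doctree and walk its sections --
# roughly the first step of any chunking transformation
from docutils import nodes
from docutils.core import publish_doctree

with open('diary.rst') as f:
    doctree = publish_doctree(f.read())

for section in doctree.traverse(nodes.section):
    title = section.next_node(nodes.title)
    print(title.astext() if title else '(untitled)')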

So if you've been thinking "I should keep a short daily journal in a flat-file and publish it to a web-server but I can't find any software to do just that" you now have one less excuse.

Bash arithmetic evaluation and errexit trap

In the "traps for new players" category:

count=0
things="0 1 0 0 1"

for i in $things;
do
   if [ $i == "1" ]; then
       (( count++ ))
   fi
done

echo "Count is ${count}"

Looks fine? I've probably written this many times. There's a small gotcha:

((expression))
The expression is evaluated according to the rules described below under ARITHMETIC EVALUATION. If the value of the expression is non-zero, the return status is 0; otherwise the return status is 1. This is exactly equivalent to let "expression".

When you run this script with -e or enable errexit -- probably because the script has become too big to be reliable without it -- count++ is going to evaluate to 0 (post-increment returns the old value), so per the above its return status is 1 and the script stops. A definite trap to watch out for!
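Some ways around it, depending on taste:

# each of these avoids tripping errexit when count is 0
(( count++ )) || true    # explicitly ignore the expression's status
(( ++count ))            # pre-increment evaluates to the new value;
                         # still fails if the result is ever 0
count=$((count + 1))     # arithmetic expansion in an assignment;
                         # the assignment always returns 0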