Saturday, November 13, 2010

Netboot Server with Gentoo and AUFS

Abstract

This howto describes the installation of a Gentoo server for a netboot system that uses aufs for per-user write layers, logical volume management (LVM), software RAID (mdadm), and Ubuntu as the guest OS. I assume you start with nothing at all and have to install the server OS first.

BOOT FROM CD

There are a number of different ways to install Gentoo. Here we do it from scratch, as that will hopefully give you some understanding of what you are doing and how this system works. Fetch SystemRescueCd from the link below, burn it, put it into your drive and boot from it. You may also use your favorite installation disk, provided it includes LVM and mdadm. http://www.sysresccd.org/Download

SETUP DISK(s)

Partitions

We create a software RAID using mdadm. Assuming we have two physical disks, we will create two partitions on each. The first partition will be very small and is only needed for the /boot folder. GRUB only supports version 1.0 of the mdadm metadata format, which is why we use --metadata=1.0 there; we also use RAID 1 for it, because GRUB does not support RAID 10. The second partition will comprise the rest of the available disk space and can, for example, be of RAID type 10, so that at any time one disk may fail and we can still recover our data. A RAID 10 with two disks thus behaves like RAID 1. On top of this RAID 10 we set up the logical volume manager and use logical volumes for data, which keeps us flexible with space distribution in case we add disks in the future. The layout of the logical volumes below is only a suggestion that has proven practical; you may want to create a different setup. But you will want at least one extra volume for the guest OS.
/dev/sda:
- /dev/sda1, set boot flag, >= 200 MB (this will be the boot partition)
- /dev/sda2 = rest (this will hold our Gentoo server and the client system)
/dev/sdb: create EXACTLY the same layout as for /dev/sda
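If you do not want to partition both disks by hand, you can partition /dev/sda interactively with fdisk (partition type fd, "Linux raid autodetect") and then clone the layout; this is only a sketch of one possible way:

sfdisk -d /dev/sda | sfdisk /dev/sdb # dump the layout of sda and replay it onto sdb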

Software Raid

Create your RAID devices:
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1 --metadata=1.0
mdadm --create /dev/md1 --level=10 --raid-devices=2 /dev/sda2 /dev/sdb2
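You can watch the arrays assemble and sync before continuing; this check is optional but cheap:

cat /proc/mdstat # shows the state and sync progress of md0 and md1
mdadm --detail /dev/md1 # prints level, state and member disks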

LVM

Create LVM volumes for the "server" and "client" disks:
vgcreate system /dev/md1
lvcreate -n server-root -L 20G system
lvcreate -n server-swap -L 4G system # this should be twice your ram size
lvcreate -n client-boot -L 200M system
lvcreate -n client-root -L 50G system # no need to save space here
lvcreate -n client-home -L 500G system # as much as you need
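Depending on your LVM version, vgcreate may refuse a device that has not been initialized as a physical volume; in that case, run pvcreate before vgcreate and verify the result afterwards:

pvcreate /dev/md1 # initialize the raid device as an LVM physical volume
vgs; lvs # verify the volume group and the logical volumes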

Filesystems

mkfs.ext2 /dev/md0
mkfs.ext4 /dev/system/server-root
mkswap /dev/system/server-swap
mkfs.ext2 /dev/system/client-boot
mkfs.ext4 /dev/system/client-root
mkfs.ext4 /dev/system/client-home

Mount

mkdir /mnt/gentoo
mount /dev/system/server-root /mnt/gentoo
mkdir /mnt/gentoo/boot
mount /dev/md0 /mnt/gentoo/boot
mkdir /mnt/gentoo/dev
mkdir /mnt/gentoo/proc
mount -t proc none /mnt/gentoo/proc

GENTOO INSTALLATION

We set up a basic Gentoo installation. Nothing special here. Just follow the instructions and you'll be fine. If you're not familiar with Gentoo, you can get a pretty good idea from the official Gentoo handbook at the link below.
Follow the instructions from the official handbook to obtain a stage3 and a portage snapshot and unpack them: http://www.gentoo.org/doc/en/handbook/handbook-x86.xml?part=1&chap=5
cp -L /etc/resolv.conf /mnt/gentoo/etc/
mount --bind /dev /mnt/gentoo/dev
chroot /mnt/gentoo /bin/bash
env-update; source /etc/profile
cp /usr/share/zoneinfo/Europe/Berlin /etc/localtime
nano /etc/locale.gen
en_US ISO-8859-1
en_US.UTF-8 UTF-8
de_DE ISO-8859-1
de_DE@euro ISO-8859-1
locale-gen
emerge --sync

INSTALL SERVER

Here we begin customization. Aside from standard tools, we install tftp-hpa, which basically IS the netboot service, as well as mdadm and LVM. After the installation, we add the services to the default runlevel. We continue by creating the directories in which our guest operating system, Ubuntu, will be installed, and set up /etc/fstab accordingly. Feel free to choose different paths if you like. The installation of nfs-utils should be clear. In /etc/hosts.allow we define which machines are allowed to boot via network; you will probably have a different setup here, so use IP ranges according to your needs. /etc/hosts.deny is consulted only after hosts.allow, so you might want to deny everything else. Then we set the hostname and the path in which tftpd will look for a kernel to boot via network; this is where we will put the guest OS's /boot dir. The setup of mdadm follows: here we specify which disks go into which RAID array. We use genkernel to create a kernel, ramdisk and modules for our purpose, then patch the kernel with aufs and set the module to autoload; this is necessary since aufs is not an official part of the kernel. After installing GRUB, we're good to reboot.
eselect profile set 7
passwd # set a secure server password, for example 7531
emerge -av =gentoo-sources-2.6.34-r1
emerge -av sysklogd vixie-cron ssmtp ntp eix htop dhcpcd openssh tftp-hpa mdadm grub genkernel
ACCEPT_KEYWORDS="~x86" emerge -av =sys-fs/lvm2-2.02.72
rc-update add sysklogd default; rc-update add vixie-cron default; rc-update add sshd default; rc-update add ntpd default; rc-update add ntp-client default
nano /etc/conf.d/net

config_eth0=( "dhcp" )
mkdir -p /tftpboot/static/root
mkdir -p /tftpboot/static/home
mkdir -p /tftpboot/static/boot
nano /etc/fstab

/dev/md0 /boot ext2 defaults 0 0
/dev/system/server-root / ext4 defaults 0 1
/dev/system/server-swap none swap sw 0 0
/dev/system/client-boot /tftpboot/static/boot ext2 defaults 0 0
/dev/system/client-root /tftpboot/static/root ext4 defaults 0 0
/dev/system/client-home /tftpboot/static/home ext4 defaults 0 0
none /proc proc defaults 0 0
USE="selinux nonfsv4 tcpd" emerge -av nfs-utils
rc-update add nfs default
nano /etc/hosts.allow

ALL: 10.11.0.0/16
ALL: 10.10.0.0/16
ALL: 10.20.0.0/16
nano /etc/hosts.deny

ALL: ALL
nano /etc/conf.d/hostname

HOSTNAME="moros"
nano /etc/conf.d/in.tftpd

INTFTPD_PATH="/tftpboot/static/boot"
rc-update add in.tftpd default
nano /etc/mdadm.conf

DEVICE /dev/sda*
DEVICE /dev/sdb*
ARRAY /dev/md0 metadata=1.0 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md1 metadata=1.1 devices=/dev/sda2,/dev/sdb2
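Instead of writing the ARRAY lines by hand, you can also let mdadm generate them and review the result:

mdadm --detail --scan >> /etc/mdadm.conf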
genkernel --install --menuconfig --lvm --mdadm all

ACCEPT_KEYWORDS="~x86" USE="nfs kernel-patch" emerge aufs2
nano /etc/modules.autoload.d/kernel-2.6

aufs
nano /boot/grub/grub.conf

default 0
timeout 30
title Gentoo Linux 2.6.34-r1
root (hd0,0)
kernel /boot/kernel-genkernel-x86-2.6.34-gentoo-r1 root=/dev/ram0 real_root=/dev/system/server-root domdadm dolvm
initrd /boot/initramfs-genkernel-x86-2.6.34-gentoo-r1
grub # starting the grub shell might take some time (up to 7 min)
grub> device (hd0) /dev/sda # use /dev/hda for IDE
grub> root (hd0,0)
grub> setup (hd0) # this might take some time
grub> device (hd1) /dev/sdb # use /dev/hdb for IDE
grub> root (hd1,0)
grub> setup (hd1) # this might take some time
grub> quit
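If you prefer a scriptable variant, the same installation should also work non-interactively via grub's batch mode (equivalent to the session above):

grub --batch <<EOF
device (hd0) /dev/sda
root (hd0,0)
setup (hd0)
device (hd1) /dev/sdb
root (hd1,0)
setup (hd1)
quit
EOF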

-----------------------

DHCP INSTALLATION / SETUP

Since booting from the network requires the client to send broadcasts and listen for a response from a DHCP server that, roughly speaking, tells it where to find the kernel to boot, we need to either install a new DHCP server or add a few lines to our existing one. The critical lines here are "next-server", which is the IP address of your netboot server, and "filename"; just leave the filename as displayed, you will understand shortly. There are several possibilities for telling a client which root path to use, i.e. which NFS export to mount as /. Since we use aufs to give every client the chance to customize the system in its own way while still having a shared base, we use a different root path for each machine. Each of those root paths consists of a static layer that comprises all the shared data and a writable layer in which the per-client data is saved. The creation of these aufs filesystems follows below. Use the information below to find the best way to set up DHCP for your needs.

emerge dhcp
nano /etc/conf.d/dhcp

INTERFACES="eth0"
nano /etc/dhcp3/dhcpd.conf
subnet 192.168.5.0 netmask 255.255.255.0 {
  range 192.168.5.100 192.168.5.254;
  option domain-name-servers 10.7.0.1;
  option routers 192.168.1.253;
  option broadcast-address 192.168.5.255;
  default-lease-time 600;
  max-lease-time 7200;
  next-server 192.168.5.1;

  ## for each host
  host 192.168.5.100 {
    hardware ethernet 00:25:64:8e:16:c4;
    fixed-address 192.168.5.100;
    filename "pxelinux.0";
    # this is perhaps the most sophisticated method to get your root fs
    # mounted; see the pxelinux setup below for another possibility
    option root-path "/tftpboot/dynamic/10.7.0.<ip>";
  }
}
------------------------

SETUP PXELINUX

Think of pxelinux as a kind of network GRUB. We emerge syslinux but are interested in only one file: pxelinux.0. This is the file you specified as "filename" in dhcpd.conf. It uses pxelinux.cfg and a boot.txt, which we will create shortly, to display a menu with options for which kernel to boot. We define two options. The default is ubuntu; it is loaded after 3 seconds. The other, admin, has proven helpful when you want to install new programs: as the admin you don't want to install them in a per-user layer, but in the shared base. To enter admin mode, hit a key early during network boot, type "admin" and hit enter.
mkdir -p /tftpboot/static/boot/pxelinux.cfg
emerge syslinux
cp /usr/share/syslinux/pxelinux.0 /tftpboot/static/boot
nano /tftpboot/static/boot/pxelinux.cfg/default

DISPLAY boot.txt
DEFAULT ubuntu

LABEL ubuntu
kernel /vmlinuz
append initrd=initrd.img rw root=/dev/nfs ip=dhcp

LABEL admin
kernel /vmlinuz
append initrd=initrd.img rw root=/dev/nfs nfsroot=10.11.2.2:/tftpboot/static/root ip=dhcp --

PROMPT 1
TIMEOUT 30 # pxelinux timeouts are given in tenths of a second: 3 seconds
nano /tftpboot/static/boot/boot.txt

- Boot Menu -
=============
ubuntu
admin
#boot admin for refsys administration
Finally, delete the persistent udev network rules, leave the chroot and reboot (now boot from the hard disk):
rm /etc/udev/rules.d/70-persistent-net.rules
exit; reboot

INSTALL CLIENT SYSTEM AND CREATE NETBOOT KERNEL AND RAMDISK

Now you have to fetch a working Ubuntu installation and put it into your shared NFS folder. There are several ways to get this done. I do it by installing a normal Ubuntu into a virtual machine, using tar to create an archive containing the whole filesystem, copying it to the server and unpacking it there. The command is: $tar -cpP --absolute-names -f stage-ubuntu.tar /
copy stage-ubuntu.tar to the server
on the server, in the nfsroot: #tar -xpvf stage-ubuntu.tar

You could also try debootstrap and chroot. Configure your system as you wish: install packages, do some customization, whatever pleases you. When you're done, we have to create a netboot ramdisk. Ubuntu comes with a nice tool to help us create it. The Ubuntu kernel and the generated ramdisk will then be stored on the netboot server. We also rearrange the filesystem on the server a bit, since initially we created an extra partition for /home of the guest OS, and we configure /etc/fstab and the network interfaces accordingly.
$cp /boot/vmlinuz-`uname -r` /root/vmlinuz
nano /etc/initramfs-tools/initramfs.conf

MODULES=netboot
BOOT=nfs
$mkinitramfs -o /root/initrd.img
Remember to set initramfs.conf back to

MODULES=most
BOOT=local
When finished, run the tar command as explained above: #tar -cpP --absolute-names -f stage-ubuntu.tar /
Copy your stage to your nfsroot /tftpboot/static/root and unpack it, using:

$tar -xpvf stage-ubuntu.tar
$mv /tftpboot/static/root/home/* /tftpboot/static/home
$mv /tftpboot/static/root/root/* /tftpboot/static/boot # the kernel and ramdisk we put into /root above end up in the tftp boot dir
cp /etc/resolv.conf /tftpboot/static/root/etc
nano /tftpboot/static/root/etc/fstab

/dev/nfs / nfs rsize=8192,wsize=8192,noatime,async 0 0
192.168.5.1:/tftpboot/static/home/ /home nfs rsize=8192,wsize=8192,noatime,async 0 0
none /proc proc nodev,noexec,nosuid 0 0
none /tmp tmpfs defaults 0 0
nano /tftpboot/static/root/etc/network/interfaces

auto lo
iface lo inet loopback
#auto eth0
iface eth0 inet manual # this is important, otherwise the system won't boot

SETUP FOLDERS AND MOUNTS AND EXPORTS ON THE SERVER

I provide a little script here that you can use as an idea of how to set up your per-client root paths. We create a directory tree for every client machine that contains the folders "root" and "tmpfs", then call aufs to:
- set /tftpboot/static/root as the read-only layer (we discussed this)
- set /tftpboot/dynamic/<ip>/tmpfs as the write layer
- and present the union at /tftpboot/dynamic/<ip>/root
We then declare /tftpboot/dynamic/<ip>/root as an NFS export.
-----snip------
#!/bin/bash

# get param
IP=$1

# create dirs
mkdir -p /tftpboot/dynamic/10.11.4.$IP/root
mkdir -p /tftpboot/dynamic/10.11.4.$IP/tmpfs

# aufs mount: the tmpfs dir is the writable branch on top of the read-only static root
mount -t aufs -o br=/tftpboot/dynamic/10.11.4.$IP/tmpfs=rw:/tftpboot/static/root=ro none /tftpboot/dynamic/10.11.4.$IP/root

# export the union via NFS
echo "/tftpboot/dynamic/10.11.4.$IP/root 10.11.0.0/16(rw,async,fsid=$IP,no_subtree_check,no_root_squash,no_all_squash,no_acl)" >> /etc/exports

exportfs -r
------snap-----
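For example, to prepare the write layer and the NFS export for the client 10.11.4.23, assuming you saved the script as mkclient.sh (the name is made up):

./mkclient.sh 23
ls /tftpboot/dynamic/10.11.4.23 # should now contain 'root' and 'tmpfs'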
That done, we delete /tftpboot/static/root/etc/udev/rules.d/70-persistent-net.rules:
$sudo rm /tftpboot/static/root/etc/udev/rules.d/70-persistent-net.rules
That's because this file contains a static binding to a network adapter, which is problematic given that many clients with different network adapters will boot this system. Restart the services and it's done.

$/etc/init.d/nfs restart
$/etc/init.d/in.tftpd restart
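Before booting the first client, a quick look at the export list does not hurt (an optional check, not part of the setup itself):

showmount -e localhost # should list /tftpboot/dynamic/<ip>/root for every client you created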

RUN A NETBOOT CLIENT AND ENJOY

Remember: for administration, you probably want to make use of the admin mode mentioned earlier.

Author: Fabian Schütz

Debian from scratch on lvm2 and software raid

Debian usually comes with a great installer that lets you use menu-based configuration tools to set up many useful features, among them lvm2 and software RAID using mdadm. But if you want to install Debian from scratch using debootstrap, you have to set up these features yourself, and if you want a root partition on LVM and RAID, you need to consider a few things so your system will be able to boot.

Debian does not use custom "boot flags" as Gentoo does, where you specify dolvm and domdadm as kernel parameters in the GRUB configuration; instead, it offers a tool to create a ramdisk suited for the job. update-initramfs can be called from the command line, but first some settings need to be made. update-initramfs reads /etc/mdadm/mdadm.conf to retrieve the configuration with which the RAID arrays are assembled. Basically, you want to have something like this

---------snip----------
ARRAY /dev/md0 metadata=1.0 devices=/dev/sda1,/dev/sdb1
ARRAY /dev/md1 metadata=1.1 devices=/dev/sda2,/dev/sdb2
---------snap----------
in your mdadm.conf.

Furthermore, update-initramfs looks into /etc/fstab and /boot/grub/menu.lst to gather information about the root device/partition. I fiddled around a bit here, but in the end it seemed that devices whose paths contain "mapper" are identified as logical volumes, thus enabling LVM at boot. I tried
--------------------
menu.lst
kernel /vmlinuz-xxx root=/dev/system/root ro quiet
--------------------
fstab
/dev/system/root / ext3 defaults 0 1
--------------------
first, but that didn't work. So I put it this way
--------------------
menu.lst
kernel /vmlinuz-xxx root=/dev/mapper/system-root ro quiet
--------------------
fstab
/dev/mapper/system-root / ext3 defaults 0 1
--------------------
and my system would boot. With "system" being my volume group, you basically need a path of this scheme: /dev/mapper/[volume-group]-[volume] in both /etc/fstab and /boot/grub/menu.lst.

That done, run $update-initramfs -u if you want to update an existing ramdisk, or create a new one using $update-initramfs -c -k <version>. The version label is only a name and can be entirely made up.
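For example, with a made-up version label "netboot":

update-initramfs -c -k netboot # creates /boot/initrd.img-netboot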

Author: Fabian Schütz

Wednesday, October 6, 2010

New version of HTF: now backwards-compatible with HUnit

I’ve just uploaded version 0.5.0.0 of the Haskell Test Framework (HTF) to hackage. The new version allows for backwards-compatibility with the HUnit library. So, for example, say you have the following existing HUnit test:

test_fac = 
    do assertEqual "fac 0" 1 (fac 0)
       assertEqual "fac 3" 6 (fac 3)

To let the HTF collect your unit tests automatically, you just need to add the following line at the top of your source file:

{-# OPTIONS_GHC -F -pgmF htfpp -optF --hunit #-}

The pragma above specifies that the source file should be run through HTF’s preprocessor htfpp in HUnit-backwards-compatibility mode. The preprocessor attaches precise location information to all assertions and collects all unit tests and all QuickCheck properties in a fresh variable called allHTFTests.

If you start your unit tests from scratch, you should leave out the -optF --hunit flag: the preprocessor then relieves you from providing location information such as "fac 0" and "fac 3" for your test cases by hand. The pragma should then look as follows:

{-# OPTIONS_GHC -F -pgmF htfpp #-}

See the HTF tutorial for more information.

Thanks to Magnus Therning, who convinced me to add the HUnit-backwards-compatibility layer to the HTF.

Author: Stefan Wehr

Tuesday, September 28, 2010

Speeding up your cabal builds - Part II

Last time, I blogged about how linking your binaries against an internal library might speed up your cabal builds. This time, I show how you can avoid building certain binaries at all.

In our company, we work on a rather large Haskell project. The cabal file specifies more than ten binaries, so it takes rather long to build all of them. But often you only need one or two of these binaries, so building them all is a waste of time.

Unfortunately, cabal does not allow you to build only a subset of your binaries. One workaround is to set the buildable flag in your .cabal file to false for the binaries you don't want to build. However, this approach is rather inflexible because you need to edit the .cabal file and do a cabal configure after every change.

The solution I present in this article allows you to specify the binaries to build as arguments to the cabal build command. For example, if you want to build only binary B, you invoke cabal as cabal build B and cabal only builds binary B.

To get this working, all you need to do is write a custom Setup.hs file:

import Data.List
import System.Exit
import Control.Exception

import Distribution.Simple
import Distribution.Simple.Setup
import Distribution.PackageDescription hiding (Flag)
import Distribution.PackageDescription.Parse
import Distribution.Verbosity (normal)

_CABAL_FILE_ = "DociGateway.cabal"

-- enable only certain binaries (specified on the commandline)                                                                                   
myPreBuildHook ::  Args -> BuildFlags -> IO HookedBuildInfo
myPreBuildHook [] flags = return emptyHookedBuildInfo
myPreBuildHook args flags =
    do let verbosity = case buildVerbosity flags of
                         Flag v -> v
                         NoFlag -> normal
       descr <- readPackageDescription verbosity _CABAL_FILE_
       let execs = map fst (condExecutables descr)
           unbuildableExecs = execs \\ args
       mapM_ (checkExistingExec execs) args
       putStrLn ("Building only " ++ intercalate ", " args)
       return (Nothing, map (\e -> (e, unbuildable)) unbuildableExecs)
    where
      unbuildable = emptyBuildInfo { buildable = False }
      checkExistingExec all x =
          if not (x `elem` all)
             then do putStrLn ("Unknown executable: " ++ x)
                     throw (ExitFailure 1)
             else return ()

main = defaultMainWithHooks $ simpleUserHooks { preBuild = myPreBuildHook }

That's all! Don't forget to set the Build-Type in your .cabal file to Custom. I've tested this approach with cabal-install version 0.8.2, using version 1.8.6 of the Cabal library.
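With the hook in place, building a subset works like this (the executable names are made up):

$ cabal build server # builds only the executable 'server'
$ cabal build server tools # builds 'server' and 'tools'
$ cabal build # no arguments: builds all executables, as before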

Happy hacking and have fun!

Author: Stefan Wehr

Thursday, August 26, 2010

Speeding up your cabal builds

Ever waited too long for your cabal builds to finish? If that's because you have multiple executable sections in your .cabal file, then there might be a solution.

By default, cabal rebuilds all relevant object files for each executable separately. In other words, object files are not shared between executables. So if you have n executables and m source files, then cabal needs n * m compilation steps plus n link steps to rebuild the executables, no matter whether a source file contributes to multiple executables.

Starting with cabal 1.8, there is a better solution, provided your executables have some source files in common. In this case, you can build a library from these common source files and then link the executables against that library. In the example above, if all n executables use the same set of m source files, you end up with m compilation steps plus n + 1 link steps. Sounds good, doesn't it?!

Here is a simple .cabal file that demonstrates how linking against an internal library works:

Name:                test
Version:             0.1
Synopsis:            test package for linking against internal libraries
Author:              Stefan Wehr
Build-type:          Simple
Cabal-version:       >=1.8 -- IMPORTANT

Library
  Hs-source-dirs: lib -- IMPORTANT
  Exposed-modules: A
  Build-Depends: base >= 4

Executable test-exe
  Build-depends: base >= 4, test -- link against the internal library
  Main-is: Main.hs -- imports A
  Hs-source-dirs: prog  -- IMPORTANT

There are some things to consider:

  • The Cabal-Version must be greater than or equal to 1.8.
  • The library and the executable must not use common source directories, otherwise the compiler does not pick the library but recompiles the source files (see the layout sketch below).
  • The library must be mentioned in the Build-depends of the executable.
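A source layout matching the .cabal file above would look like this, with module A in lib/ and the executable's Main.hs in prog/:

test.cabal
lib/A.hs
prog/Main.hs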

Running cabal build now gives the following output:

Building test-0.1...
[1 of 1] Compiling A                ( lib/A.hs, dist/build/A.o )
Registering test-0.1...
[1 of 1] Compiling Main             ( prog/Main.hs, dist/build/test-exe/test-exe-tmp/Main.o )
Linking dist/build/test-exe/test-exe ...

No rebuilding of A when compiling Main!!!

This feature of cabal isn't mentioned in the manual, at least I didn't find it. Further, there seems to be no changelog for cabal. I found out about this feature by browsing cabal's bug tracker. Is there a better way to get informed about new features of cabal?

Note: I successfully tested this with cabal-install version 0.8.2 (cabal library 1.8.0.4). I couldn’t get it to work with cabal-install version 0.8.0.

Author: Stefan Wehr

Tuesday, August 3, 2010

Cross-Compiling DLLs with Linux

When working in a Linux-driven work environment, it is nice to be able to also compile your Windows projects under Linux. One question that arises is how to compile DLLs. Thankfully, this is a very straightforward process using MinGW.

Creating the DLL

Simply create a file example_dll.c along with the fitting header example_dll.h:

/* example_dll.c */
#include "example_dll.h"

int example_function(int n) {
    return n * 42;
}

/* example_dll.h */
#ifndef EXAMPLE_DLL_H__
#define EXAMPLE_DLL_H__

int example_function(int n);

#endif

Then just compile it with:
$> i586-mingw32msvc-gcc -shared example_dll.c -o example.dll

and voilà, you have your DLL ready to use. This is just a simple example DLL, but with this method it is possible to create full-blown DLLs with thousands of lines of code. When you keep your code clean and platform-independent, you can compile the same code into a shared library for Linux and a DLL for Windows, and even link against other dynamic libraries like OpenSSL or libcurl. It is advisable, though, to use GNU Automake and GNU Libtool when creating larger projects, to ease the hassle of the growing command lines, especially because of the different options for Windows and Linux. GNU Automake will take care of all that automatically, also when cross-compiling.

Using the DLL

Using the DLL works just as you would expect. For this example, create a file use_dll.c with the following content:

#include <stdio.h>
#include "example_dll.h"

int main() {
    int res = example_function(13);
    printf("%d should be %d!\n", res, 13 * 42);

    return 0;
}

Then your program compiles as simply as this, ready to use on any Windows system:
$> i586-mingw32msvc-gcc use_dll.c example.dll -o example.exe

Using GNU Automake

Creating DLLs with GNU Automake and GNU Libtool isn't difficult either. With your working Automake setup, simply add the macro
AC_LIBTOOL_WIN32_DLL
to your configure.ac, and GNU Libtool will create clean DLLs for your project when configured for cross-compiling.

Using the DLL with MSVC

To link the DLL against a project in MSVC, you will have to generate a .lib file, and for that you will have to generate a .def file. So when compiling on your Linux machine, just add the following parameter to your gcc command line:
-Wl,--output-def,example.def
which tells the linker to write the .def file to example.def. Then, on your Windows machine with an installation of some kind of MSVC compiler, execute the following command:
lib /machine:i386 /def:example.def
to compile the .def into a .lib, which you can then link against in your project. Don't forget to do this step every time your API changes...
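Putting the Linux-side steps together, building the DLL and the .def file in one go looks like this (same file names as above):

$> i586-mingw32msvc-gcc -shared example_dll.c -o example.dll -Wl,--output-def,example.def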

Author: Jonathan Dimond

Tuesday, July 27, 2010

Windows Phone 7

This fall - at the latest in time for the Christmas season - the first devices running Windows Phone 7, the successor to Windows Mobile, will appear. Lately, Windows Mobile was mostly used in the enterprise segment - for Exchange integration, but also as a platform for industrial devices - while hardly any consumer devices were sold and the touchscreen trend triggered by the iPhone was missed.

What is Windows Phone 7?

For Microsoft, Windows Phone 7 marks a clean break with the previous Windows Mobile. Under the hood there is "only" a new version of Windows CE (new name: Windows Embedded), but third-party applications are offered entirely new APIs, or APIs familiar from other platforms. For the current fiscal year, Microsoft has two big projects: Windows Phone 7 and "cloud computing" (meaning Windows Live, Zune, and other proprietary Microsoft services).

Hallmarks of this break:
  1. High and precise hardware requirements:
    • at least a 1 GHz processor
    • a dedicated graphics processor
    • only two supported screen resolutions
    • only three form factors (touchscreen only, touchscreen with slider, possibly a candybar phone)
  2. The previous "proliferation" of a huge number of very different devices is to be replaced by a few high-quality, at least initially high-priced, and easier-to-market models.
  3. No backwards compatibility with Windows Mobile.
  4. An entirely new user interface ("Metro"), designed around usability criteria such as good readability and comprehensibility - and therefore, to put it pointedly, no way to customize the UI apart from the color scheme (no "HTC Sense" or "Motorola Blur" in their current form).
  5. Shared APIs across several platforms:
    • Silverlight for desktop and Windows Phone development
    • XNA for game development for Windows Phone, Xbox and desktop
    Development is possible only through these APIs; that means C# and .NET are the only option and there is no direct access to the hardware.
  6. Connection to the "cloud": integration with location services, Bing, Bing Maps, and other Microsoft services via the Windows Live ID.
  7. Applications can be installed exclusively through the Marketplace (except for development purposes).

Windows Phone 7 in detail

User interface


The product is meant to center on the "user experience": users should enjoy operating the device, and operation should be simple and self-explanatory. As known from other devices, there is a start screen that can only be scrolled vertically and that consists of a fixed grid of tiles. Users can drag application shortcuts onto this grid, which - similar to Android's widgets - can also display changing information.

Next to it (reachable by scrolling horizontally) is the application list, which initially consists of six "hubs": Contacts, Office, Pictures, Audio & Video, Market, and Games. These hubs (take Office as an example) are meant to bundle everyday tasks in a sensible way; the Contacts hub, for instance, also directly displays information from social networks such as Facebook. Microsoft criticizes that today there are separate apps for several contact-related tasks - address book, Facebook, messaging, and so on; on Windows Phone, all of this is consolidated in the Contacts hub.

On top of that comes tight integration with Microsoft programs and services such as Office, Exchange, and web services. The Office and Exchange integration in particular was already a hallmark of Windows Mobile 6.5 and will remain. This time, however, Office really seems to have been designed from the phone's point of view, rather than simply brought over from the PC, as the previous versions felt.

Marketplace


Analogous to the App Store and the Android Market, the Marketplace gains major importance and is expected to open in August 2010, so that applications are already available before the first devices ship. To be able to publish applications on the Market, you have to pay 99 dollars per year. For every sold app, Microsoft collects 30% of the price. For paid applications, the testing by Microsoft that is mandatory for certification and thus publication is free of charge. However, each Market member may have a free application tested at most five times per year; every further test then costs 20 dollars.

The tests also include a content review and are thus comparatively strict (similar to Apple, but different from Android). Together with the high prices, this is meant to keep too many free (and thus seemingly "worthless") or generally "pointless" applications, as well as unwanted content, out of the Market.

Missing Features

The first release will lack quite a few things, some of which - but only some - will be delivered later. Microsoft seems to be under massive time pressure and openly admits that while many things were deliberately not carried over from Windows Mobile, many others simply could not be finished in time for a fall release.
  • No multitasking, no services: a third-party application cannot run in the background. Exceptions: when a call comes in, or when a system task is invoked for a specific job. Accordingly, there will be no Skype for Windows Phone 7.
  • No raw sockets: for third-party applications, only web protocols such as HTTP, HTTPS and so on are supported.
  • Third-party applications run in strictly isolated sandboxes and cannot invoke other applications or share data or libraries with them; only a few selected system functions and system apps can be called from one's own app.
  • No direct access to sensors such as the camera: no "augmented reality" applications are possible, and photos can only be taken by launching the camera app and receiving the photo it produced. Reading and writing the address book likewise happens by calling the contacts app.
  • Even in the business sector, deployment is initially only possible through the Market.

Development environment

The complete development environment, which is available only for Windows, is free of charge - including tools that cost money when targeting other platforms, such as Visual Studio 2010 or Expression Blend (for UI development).

Microsoft hopes to win developers over mainly with this very comprehensive and well-thought-out development environment. From the extensive Visual Studio, whose paid version makes it easy to deploy applications for the web, the Windows desktop, and Windows Phone alike, through easy-to-use UI builders, to the ability to design a clickable UI prototype that can easily be passed around by mail - Microsoft has put a lot of energy and ideas into application development. The XNA integration is meant to win over Xbox developers, who get the chance to write games for Windows Phone 7 without having to relearn anything.

The downsides:

There is no way to develop natively for the Windows CE underneath Windows Phone or to access the hardware directly. Every application builds on Silverlight and/or XNA, and thus on C# and .NET.

A lot is also not finished yet. The SDK release, together with the opening of the Marketplace, is still supposed to follow this summer, but the emulator is still buggy and basic UI APIs such as a scroll view are missing. Paradoxically, the IE 7 adapted for Windows Phone does not support Silverlight (and initially no Flash either). These examples show the enormous time pressure - and with it the pressure to either succeed in the Christmas season or let it be. The further development, and thus the delivery of the "missing features", may well depend on that success.

Conclusion

The advantages and milestones for Microsoft are the new user interface and the integrated application development with simple, modern APIs such as Silverlight and XNA that target several platforms at once - web, Windows desktop, Xbox, and Windows Phone. The latter in particular will make it easy for application developers, especially Xbox game developers, to get started. The price is a difficult transition for everyone who has developed for Windows Mobile or for entirely different platforms: here, too, you have to buy into the Microsoft world.

Many vendors of solutions based on Windows Mobile will not make the switch to Windows Phone for now: the missing backwards compatibility prevents an easy migration, and the hardware requirements as well as missing APIs make certain solutions from the industrial sector impossible for the time being.

However, modern and easy-to-use mobile devices are nowadays in demand in the enterprise and industrial sectors as well. The new user interfaces of the iPhone, Android, or Microsoft are therefore not only interesting for private users. Especially where inexpensive devices and greater freedom for development are required, Android could gain market share in the future.

From a marketing perspective, Microsoft is thus first and foremost releasing another iPhone competitor - one that, apart from its user interface, does not stand out much from its role model, and that in several respects (no multitasking, an initially small Marketplace, no way to develop for tablets with the current platform, just as restrictive as iPhone OS with regard to deployment and OS access, and the Zune desktop application required for updates and syncing) lags behind or suffers considerable restrictions. Much will depend on the first devices running the new operating system.

Author: Dirk Spöri

Thursday, July 22, 2010

MVars in Objective-C

MVars (mutable variables) are a well-known synchronization primitive in the functional programming language Haskell (see here for the API documentation). An MVar is like a box that can be either empty or full. A thread trying to read from an MVar blocks until the MVar becomes full; a thread writing to an MVar blocks until the MVar becomes empty.

Recently, I needed MVars in Objective-C. I'm sure I could have solved the problem with other synchronization mechanisms from Apple's API, but as we all know, programmers are too lazy to read API documentation, and programming MVars in Objective-C is fun anyway. I started with this simple interface for MVars:

@interface MVar : NSObject {
  @private
    NSCondition *emptyCond;
    NSCondition *fullCond;
    id value;
    BOOL state;
}
// Reads the current value from the MVar, blocks until a value is available.
- (id)take;
// Stores a new value into the MVar, blocks until the MVar is empty.
- (void)put:(id)val;
// Creates an MVar that is initially filled with the given value.
- (id)initWithValue:(id)val;
@end
Here is a trivial nonsense program that uses the MVar interface to solve the producer-consumer problem:
#import "MVar.h"
#define N 1000
@implementation MVarTest
- (void)producer:(MVar *)mvar {
    for (NSInteger i = 0; i < N; i++) {
        [mvar put:[NSNumber numberWithInteger:i]];
    }
}

- (void)consumer:(MVar *)mvar {
    for (NSInteger i = 0; i < N; i++) {
        NSNumber *n = [mvar take];
        // do something with n
    }
}

- (void)main {
    MVar *mvar = [[[MVar alloc] init] autorelease];
    [NSThread detachNewThreadSelector:@selector(producer:)
                       toTarget:self withObject:mvar];
    [self consumer:mvar];
}
@end

Let's come back to the implementation of MVars. The condition variables emptyCond and fullCond signal that the MVar is empty/full. The variable state stores the state of the MVar (empty/full). With this in hand, the actual implementation of the MVar class is straightforward:

#import "MVar.h"
#import "Common.h"

#define STATE BOOL
#define EMPTY NO
#define FULL YES

@interface MVar ()
@property (nonatomic,retain) id value;
@end

@implementation MVar

// Notifies waiting threads that the state of the MVar has changed.
- (void)signal:(STATE)aState {
    NSCondition *cond = (aState == FULL) ? fullCond : emptyCond;
    [cond lock];
    self->state = aState;
    [cond signal];
    [cond unlock];
}

- (id)take {
    [fullCond lock];
    while (state != FULL) {
        [fullCond wait];
    }
    id res = self.value;
    self.value = nil;
    [fullCond unlock];
    [self signal:EMPTY];
    return res;
}

- (void)put:(id)aValue {
    [emptyCond lock];
    while (state != EMPTY) {
        [emptyCond wait];
    }
    self.value = aValue;
    [emptyCond unlock];
    [self signal:FULL];
}

// Creates an MVar that is initially empty.
- (id)init {
    if ((self = [super init])) {
        self->emptyCond = [[NSCondition alloc] init];
        self->fullCond = [[NSCondition alloc] init];
        [self signal:EMPTY];
    }
    return self;
}

- (id)initWithValue:(id)aValue {
    self = [self init];
    [self put:aValue];
    return self;
}

- (void)dealloc {
    [emptyCond release];
    [fullCond release];
    [value release];
    [super dealloc];
}

@synthesize value;
@end

Please let me know if you find any bugs in the code shown in this article.

Happy hacking and have fun!

Author: Stefan

Wednesday, March 17, 2010

DPM: Darcs Patch Manager

I've just released the initial version of DPM on Hackage! The Darcs Patch Manager (DPM for short) is a tool that simplifies working with the revision control system darcs. It is most effective when used in an environment where developers do not push their patches directly to the main repository, but where patches undergo a reviewing process before they are actually applied. Here is a short story that illustrates how one would use the DPM in such situations.

Suppose that Dave Developer implements a very cool feature. After polishing his patch, Dave uses darcs send to send the patch:

  $ darcs send host:MAIN_REPO
  Tue Mar 16 16:55:09 CET 2010  Dave Developer <dave@example.com>

    * very cool feature
  Shall I send this patch? (1/1)  [ynWsfvplxdaqjk], or ? for help: y
  Successfully sent patch bundle to: patches@example.com

After the patch has been sent to the address patches@example.com, DPM comes into play. For this example, we assume that mail delivery for patches@example.com is handled by some mailfilter program such as maildrop (http://www.courier-mta.org/maildrop/) or procmail (http://www.procmail.org/). The task of the mailfilter program is to add all patches sent to patches@example.com to the DPM database. This is achieved with the DPM command add:

  $ dpm add --help
  add: Put the given patch bundles under DPM's control (use '-' to read from stdin).
  Usage: add FILE...

  Command options:

  Global options:
    -r DIR  --repo-dir=DIR                  directory of the darcs repository
    -s DIR  --storage-dir=DIR               directory for storing DPM data
    -v      --verbose                       be verbose
            --debug                         output debug messages
            --batch                         run in batch mode
            --no-colors                     do not use colors when printing text
            --user=USER                     current user
            --from=EMAIL_ADDRESS            from address for emails
            --review-address=EMAIL_ADDRESS  email address for sending reviews
    -h, -?  --help                          display this help message

Now suppose that Dave’s patch is in the DPM database. A reviewer, call him Richard Reviewer, uses the DPM command list to see what patches are available in this database:

  $ dpm list --help
  list: List the patches matching the given query.

  Query ::= Query ' + ' Query  -- logical OR
          | Query ' '   Query  -- logical AND
          | '^' Query          -- logical NOT
          | '{' Query '}'      -- grouping
          | ':' Special
          | String

  Special is one of "undecided", "rejected", "obsolete", "applied",
  "reviewed", "open", or "closed", and String is an arbitrary sequence
  of non-whitespace characters not starting with '^', '{', '}', '+', or ':'.

  If no query is given, DPM lists all open patch groups.

  Usage: list QUERY ...

  Command options:

  Global options:
    -r DIR  --repo-dir=DIR                  directory of the darcs repository
    -s DIR  --storage-dir=DIR               directory for storing DPM data
    -v      --verbose                       be verbose
            --debug                         output debug messages
            --batch                         run in batch mode
            --no-colors                     do not use colors when printing text
            --user=USER                     current user
            --from=EMAIL_ADDRESS            from address for emails
            --review-address=EMAIL_ADDRESS  email address for sending reviews
    -h, -?  --help                          display this help message

In our example, the output of the list command might look as follows:

  $ dpm -r MAIN_REPO -s DPM_DB list
    very cool feature [State: OPEN]
      7861 Tue Mar 16 17:20:45  2010 Dave Devloper <dave@example.com>
           State: UNDECIDED, Reviewed: no
           added
    some other patch [State: OPEN]
      7631 Tue Mar 16 13:15:20  2010 Eric E. <eric@example.com>
           State: REJECTED, Reviewed: yes
           added
    …

(The -r option specifies the path to the darcs repository in question. The -s option specifies the directory containing the DPM database; initially, you simply create an empty directory.)

DPM groups all patches with the same name inside a patch group. Patch groups allow keeping track of multiple revisions of the same patch. In the example, the patch group named very cool feature has only a single member: the patch Dave just created. The patch is identified by a unique suffix of its hash (7861 in the example). The output of the list command further tells us that no reviewer has decided yet what to do with the patch (it's in state UNDECIDED).

At this point, Richard Reviewer reviews Dave’s patch. During the review, he detects a minor bug so he rejects the patch:

  $ dpm -r MAIN_REPO -s DPM_DB review 7861
    Reviewing patch 7861
    Starting editor on DPM_DB/reviews/2010-03-16_7861_swehr_24166.dpatch
      <inspect patch in editor>
    Mark patch 7861 as reviewed? [Y/n] y
    Patch 7861 is in state UNDECIDED, reject this patch? [y/N] y
    Enter a comment: one minor bug
    Marked patch 7861 as reviewed
    Moved patch 7861 to REJECTED state
    Send review to Dave Developer <dave@example.com>? [Y/n] y
    Mail sent successfully.

Now Dave Developer receives an email stating that his patch has been rejected. The email also contains the full review, so Dave sees why the patch has been rejected. Thus, Dave starts fixing the bug, does an amend-record of the patch, and finally sends the patch again. (Alternatively, he could also create a new patch with exactly the same name as the original patch.)

  $ darcs send MAIN_REPO
  Tue Mar 16 16:55:09 CET 2010  Dave Developer <dave@example.com>
    * very cool feature
  Shall I send this patch? (1/1)  [ynWsfvplxdaqjk], or ? for help: y
  Successfully sent patch bundle to: patches@example.com

Once the email is received, the improved patch is added to the DPM database. The output of the list command now looks like this:

  $ dpm -r MAIN_REPO -s DPM_DB list
    very cool feature [State: OPEN]
      2481 Tue Mar 16 17:50:23  2010 Dave Devloper <dave@example.com>
           State: UNDECIDED, Reviewed: no
           added
      7861 Tue Mar 16 17:20:45  2010 Dave Devloper <dave@example.com>

           State: REJECTED, Reviewed: yes
           marked as rejected: one minor bug
    some other patch [State: OPEN]
      7631 Tue Mar 16 13:15:20  2010 Eric E. <eric@example.com>
           State: REJECTED, Reviewed: yes
           added
    …

The patch 2481 is the improved revision of the original patch 7861. It is in the same group as the original patch because both patches have the same name. Richard Reviewer reviews the improved patch and has no complaints anymore:

  $ dpm -r MAIN_REPO -s DPM_DB review 2481
    Reviewing patch 2481
    Starting editor on DPM_DB/reviews/2010-03-16_2481_swehr_876102.dpatch
      <inspect patch in editor>
    Mark patch 2481 as reviewed? [Y/n] y
    Patch 2481 is in state UNDECIDED, reject this patch? [y/N] n
    Enter a comment: ok
    Marked patch 2481 as reviewed
    Send review to Dave Developer <dave@example.com>? [y/N] n

At this point, Richard Reviewer applies the patch with the very cool feature:

  $ dpm apply 2481
    About to apply patch 2481
    Entering DPM’s dumb (aka interactive) apply command.
    Future will hopefully bring more intelligence.

    Instructions:
    =============
    - Press 'n' until you reach
      Tue Mar 16 17:50:23  2010 Dave Devloper <dave@example.com>

        * very cool feature
      (Hash: 20100316162041-c71f4-871aedab8f4dd3bd042b9188f1496011c7dd2481)
    - Press 'y' once
    - Press 'd'

    Tue Mar 16 17:50:23  2010 Dave Devloper <dave@example.com>
      * very cool feature
    Shall I apply this patch? (1/1)  [ynWsfvplxdaqjk], or ? for help: y
    Finished applying…
    Patch 2481 applied successfully
    Send notification to author Dave Developer <dave@example.com> of patch 2481? [Y/n] y
    Mail sent successfully.

Applying a patch closes the corresponding patch group. By default, the list command doesn't display closed patch groups, but we can force it to do so with the :closed query:

  $ dpm list :closed
    very cool feature [State: CLOSED]
      2481 Tue Mar 16 17:50:23  2010 Dave Devloper <dave@example.com>

           State: APPLIED, Reviewed: yes
           marked as applied: -
      7861 Tue Mar 16 17:20:45  2010 Dave Devloper <dave@example.com>
           State: REJECTED, Reviewed: yes
           marked as rejected: one minor bug
      …

Author: Stefan Wehr

Tuesday, March 16, 2010

HTF: a test framework for Haskell

After nearly 5 years of inactivity, I've finally managed to upload a new version of the Haskell Test Framework (HTF) to Hackage. The HTF is a test framework for the functional programming language Haskell. The framework lets you define unit tests (http://hunit.sourceforge.net), QuickCheck properties (http://www.cs.chalmers.se/~rjmh/QuickCheck/), and black box tests in an easy, uniform and convenient way. The HTF uses a custom preprocessor that collects test definitions automatically. Furthermore, the preprocessor allows the HTF to report failing test cases with exact file name and line number information.

Here's a short tutorial on how to use the HTF. It assumes that you are using GHC for compiling your Haskell code. (It is possible to use the HTF with other Haskell environments, only the steps taken to invoke the custom preprocessor of the HTF may differ in this case.) Note that a hyperlinked version of this tutorial will shortly be available on http://hackage.haskell.org/package/HTF.

Suppose you are writing a function for reversing lists:

myReverse :: [a] -> [a]
myReverse [] = []
myReverse [x] = [x]
myReverse (x:xs) = myReverse xs

To test this function using the HTF, you first create a new source file with an OPTIONS_GHC pragma in the first line.

{-# OPTIONS_GHC -F -pgmF htfpp #-}

This pragma instructs GHC to run the source file through htfpp, the custom preprocessor of the HTF. The following import statements are also needed:

import System.Environment ( getArgs )
import Test.Framework

The actual unit tests and QuickCheck properties are defined like this:

test_nonEmpty = do assertEqual [1] (myReverse [1])
                   assertEqual [3,2,1] (myReverse [1,2,3])

test_empty = assertEqual ([] :: [Int]) (myReverse [])

prop_reverse :: [Int] -> Bool
prop_reverse xs = xs == (myReverse (myReverse xs))

When htfpp consumes the source file, it replaces the assertEqual tokens (and other assert-like tokens, see Test.Framework.HUnitWrapper) with calls to assertEqual_, passing the current location in the file as the first argument. Moreover, the preprocessor collects all top-level definitions starting with test_ or prop_ in a test suite with name allHTFTests of type TestSuite.

Definitions starting with test_ denote unit tests and must be of type Assertion, which just happens to be a synonym for IO (). Definitions starting with prop_ denote QuickCheck properties and must be of type T such that T is an instance of the type class Testable.

To run the tests, use the runTestWithArgs function, which takes a list of strings and the test suite:

main = do args <- getArgs
          runTestWithArgs args allHTFTests

Here is the skeleton of a .cabal file which you may want to use to compile the tests.

Name:          HTF-tutorial
Version: 0.1
Cabal-Version: >= 1.6
Build-type: Simple

Executable tutorial
Main-is: Tutorial.hs
Build-depends: base, HTF

Compiling the program just shown (you must include the code for myReverse as well), and then running the resulting program with no further commandline arguments yields the following output:

Main:nonEmpty (Tutorial.hs:17)
*** Failed! assertEqual failed at Tutorial.hs:18
expected: [3,2,1]
but got: [3]

Main:empty (Tutorial.hs:19)
+++ OK

Main:reverse (Tutorial.hs:22)
*** Failed! Falsifiable (after 3 tests and 1 shrink):
[0,0]
Replay argument: "Just (847701486 2147483396,2)"

* Tests: 3
* Passed: 1
* Failures: 2
* Errors: 0

(To check only specific tests, you can pass commandline arguments to the program: the HTF then runs only those tests whose name contain at least one of the commandline arguments as a substring.)

You see that the message for the first failure contains exact location information, which is quite convenient. Moreover, for the QuickCheck property Main.reverse, the HTF also outputs a string representation of the random generator used to check the property. This string representation can be used to replay the property. (The replay feature may not be useful for this simple example, but it helps in more complex scenarios.)

To replay a property you simply use the string representation of the generator to define a new QuickCheck property with custom arguments:

prop_reverseReplay =
    withQCArgs (\a -> a { replay = read "Just (1060394807 2147483396,2)" })
               prop_reverse

To finish this tutorial, we now give a correct definition for myReverse:

myReverse :: [a] -> [a]
myReverse [] = []
myReverse (x:xs) = myReverse xs ++ [x]

Running our tests again on the fixed definition then yields the desired result:

Main:nonEmpty (Tutorial.hs:17)
+++ OK

Main:empty (Tutorial.hs:19)
+++ OK

Main:reverse (Tutorial.hs:22)
+++ OK, passed 100 tests.

Main:reverseReplay (Tutorial.hs:24)
+++ OK, passed 100 tests.

* Tests: 4
* Passed: 4
* Failures: 0
* Errors: 0

The HTF also allows the definition of black box tests. See the documentation of the Test.Framework.BlackBoxTest module for further information.

Author: Stefan Wehr