Nov 08

To access Bluetooth Low Energy (aka Bluetooth Smart) devices from an iOS or Android device, it’s useful to know a bit of related terminology first.

GAP (Generic Access Profile) is concerned with scanning for and connecting to devices. GATT (Generic Attribute Profile) comes into play once you have connected to a device and want to access its services and characteristics.

In the GAP context, the iOS/Android device usually acts as the central, while the BLE accessory is the peripheral. Note that the roles described here are conceptual, i.e. an iOS/Android device can act as a peripheral as well (with limitations), which can be really useful when e.g. generating data for testing.

When scanning for peripherals (i.e. listening to advertisements from peripherals), there are two specialized roles: observer and broadcaster. The broadcaster sends relevant information in its advertisement packet to any observer that might be listening. In other words, a broadcaster doesn’t care how many observers (if any) are listening to its advertisements at any given time. The iOS/Android device normally acts as the observer, and the BLE accessory is the broadcaster.

To send custom data to any device that may be listening, you can write such information to e.g. the manufacturer data field of the advertisement packet. When only sending and listening to advertisement packets (potentially with context-specific data), no connection needs to be established between the devices.

Be aware that certain fields of the advertisement packet may get cached. For example, the local name field tends to get cached on Android, and can therefore not reliably be used for broadcasting information that is bound to change.

The basic procedure for setting up a BLE connection is roughly as follows:

1. Scan for devices advertising specific services
2. Connect to a device
3. Discover service(s)
4. Discover characteristic(s)
5. Register for characteristic notifications

To connect your central to a peripheral (steps 1 & 2), e.g. in order to read and write data between just the two devices, you start the connection process by scanning for peripherals that provide the specific service (or services) you require. In a typical situation where multiple peripherals are advertising themselves in the vicinity of a central device, you may want to include a customized identifier in the advertisement packet (thereby fulfilling the broadcaster role, see above) so that the central can observe the sent identifiers and distinguish a particular peripheral from the others.

Steps 3 to 5 are covered by GATT, which comes into play when you have actually established a connection with a BLE peripheral. Once connected, you can scan the peripheral’s services and their related characteristics.

A BLE device profile consists of one or more services, each of which can have one or more characteristics. A common case is that there’s one service, which has one characteristic for reading and one for writing. Services are basically a logical collection of read/write characteristics. Each service and characteristic is identified by a 16-bit or 128-bit unique identifier (UUID). The 16-bit identifiers are assigned by the Bluetooth SIG to ensure/encourage interoperability between BLE devices, while 128-bit identifiers are available for building custom services/characteristics. Each characteristic can also have descriptors that can describe the characteristic’s value, set minimum/maximum limits, or whatever you may need. Dealing with descriptors is normally not necessary, except in one particular case on Android that will be covered later.
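To make the two identifier lengths concrete, here’s what they look like in code, using Core Bluetooth’s CBUUID type as an example (0x180D happens to be the SIG-assigned Heart Rate service; the 128-bit value is just a made-up example):

import CoreBluetooth

// 16-bit SIG-assigned identifier (0x180D = Heart Rate service)
let standardServiceUUID = CBUUID(string: "180D")

// 128-bit identifier for a custom service (arbitrary example value)
let customServiceUUID = CBUUID(string: "12345678-90AB-CDEF-1234-567890ABCDEF")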

You can read a characteristic’s current value, or register to get notified when the value changes. Notifications allow you to get updates whenever they happen, instead of polling the current value repeatedly. When writing to a characteristic, it’s possible to get confirmation of a successful write operation, assuming the characteristic has been configured to support this on the peripheral.

iOS Implementation Details

Communication with BLE devices on iOS is handled using the Core Bluetooth framework. You start the connection process by initializing an instance of CBCentralManager, supplying a CBCentralManagerDelegate as an argument to the initializer. When initialization is complete, Core Bluetooth calls your delegate’s centralManagerDidUpdateState() method, where you can take further action as needed.

If everything is OK, CBCentralManager’s state property will be PoweredOn. If, however, the user has Bluetooth turned off (state PoweredOff), they will at this point be presented with a dialog requesting them to turn it on.

After initializing the CBCentralManager instance, you can start scanning for devices by calling CBCentralManager’s scanForPeripheralsWithServices() method, passing an array of CBUUID objects representing the BLE services the scanned peripherals must provide. Often there is just one service, but more complex devices may have more.
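As a minimal sketch of the above (in Swift; the Heart Rate UUIDs are stand-ins for whatever services and characteristics your accessory provides, and exact delegate signatures vary a bit between SDK versions):

import CoreBluetooth

class BLEConnection: NSObject, CBCentralManagerDelegate, CBPeripheralDelegate {
    // Example UUIDs from the SIG-assigned Heart Rate service; substitute your own
    let serviceUUID = CBUUID(string: "180D")
    let notifyCharacteristicUUID = CBUUID(string: "2A37") // Heart Rate Measurement
    let writeCharacteristicUUID = CBUUID(string: "2A39")  // Heart Rate Control Point

    var central: CBCentralManager!
    var peripheral: CBPeripheral?                 // strong reference, see below
    var writeCharacteristic: CBCharacteristic?

    override init() {
        super.init()
        // The delegate is supplied here; centralManagerDidUpdateState()
        // gets called once initialization is complete
        central = CBCentralManager(delegate: self, queue: nil)
    }

    func centralManagerDidUpdateState(central: CBCentralManager) {
        if central.state == .PoweredOn {
            central.scanForPeripheralsWithServices([serviceUUID], options: nil)
        }
    }
}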

Once a matching peripheral is found, Core Bluetooth calls the didDiscoverPeripheral() delegate method. (Note that the actual function name is much more convoluted, mostly due to legacy reasons, but this and the other delegate methods are commonly known by these shortened forms.) Here you can read the discovered device’s advertising data. If the data you need is already contained in the advertisement packet, you can extract it here and continue to listen to further advertisements. Advertisement data can also contain information you need to decide whether you want to connect to the discovered device. Note that you need to store the CBPeripheral object locally, otherwise it gets deallocated.
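Continuing the sketch, the discovery callback (in the same class) might look like this:

func centralManager(central: CBCentralManager, didDiscoverPeripheral peripheral: CBPeripheral,
    advertisementData: [String : AnyObject], RSSI: NSNumber) {
    // Data broadcast in the advertisement packet can be read here, without connecting
    if let manufacturerData = advertisementData[CBAdvertisementDataManufacturerDataKey] as? NSData {
        print("Manufacturer data: \(manufacturerData)")
    }
    // Keep a strong reference, or the CBPeripheral object gets deallocated
    self.peripheral = peripheral
}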

To initiate a connection, call the connectPeripheral() method of your CBCentralManager. When connected, Core Bluetooth calls your central manager delegate’s didConnectPeripheral() method, where you can set yourself as the peripheral’s delegate (a CBPeripheralDelegate) and start discovering services on the peripheral by calling discoverServices() on the CBPeripheral object.
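In the sketch, connecting (called e.g. from didDiscoverPeripheral above, once you’ve decided on a peripheral) could look like this:

func connectTo(peripheral: CBPeripheral) {
    central.stopScan()
    central.connectPeripheral(peripheral, options: nil)
}

func centralManager(central: CBCentralManager, didConnectPeripheral peripheral: CBPeripheral) {
    peripheral.delegate = self // we act as the CBPeripheralDelegate as well
    peripheral.discoverServices([serviceUUID])
}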

Once service discovery is complete, Core Bluetooth calls your CBPeripheralDelegate’s didDiscoverServices() method. When you have found the service you’re interested in, you’ll want to discover its characteristics by calling discoverCharacteristics() on the CBPeripheral object. Once that completes, you’ll get a call to the didDiscoverCharacteristicsForService() method. Here you can e.g. register to be notified of changes to a readable characteristic’s value, or store a writable characteristic for further use.
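In the sketch, the two discovery callbacks might be along these lines:

func peripheral(peripheral: CBPeripheral, didDiscoverServices error: NSError?) {
    guard let services = peripheral.services else { return }
    for service in services where service.UUID == serviceUUID {
        peripheral.discoverCharacteristics(nil, forService: service)
    }
}

func peripheral(peripheral: CBPeripheral,
    didDiscoverCharacteristicsForService service: CBService, error: NSError?) {
    guard let characteristics = service.characteristics else { return }
    for characteristic in characteristics {
        if characteristic.UUID == notifyCharacteristicUUID {
            // Register for value change notifications (see below)
            peripheral.setNotifyValue(true, forCharacteristic: characteristic)
        } else if characteristic.UUID == writeCharacteristicUUID {
            writeCharacteristic = characteristic // store for later writes
        }
    }
}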

To listen for notifications when a readable characteristic’s value changes, call setNotifyValue() on the CBPeripheral object, with the characteristic as an argument. Note that for this to work, the characteristic must be configured to support notifications on the peripheral. When the characteristic’s value is subsequently updated, Core Bluetooth calls your CBPeripheralDelegate’s didUpdateValueForCharacteristic() method. You can read the current value of the characteristic from the CBCharacteristic parameter’s value property as NSData.
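The value updates would then arrive in the sketch like this:

func peripheral(peripheral: CBPeripheral,
    didUpdateValueForCharacteristic characteristic: CBCharacteristic, error: NSError?) {
    if let data = characteristic.value {
        print("\(characteristic.UUID) is now \(data)")
    }
}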

Writing data to a characteristic is done by calling the writeValue() method on the CBPeripheral object. You can choose to either get a write result (if the peripheral supports it), or ignore it. In the former case, your CBPeripheralDelegate’s didWriteValueForCharacteristic() method gets called with the result of the write operation.
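And finally, a hypothetical write helper plus the confirmation callback round out the sketch:

func send(bytes: [UInt8]) {
    guard let peripheral = peripheral, characteristic = writeCharacteristic else { return }
    let data = NSData(bytes: bytes, length: bytes.count)
    // .WithResponse requests a confirmation; use .WithoutResponse to skip it
    peripheral.writeValue(data, forCharacteristic: characteristic, type: .WithResponse)
}

func peripheral(peripheral: CBPeripheral,
    didWriteValueForCharacteristic characteristic: CBCharacteristic, error: NSError?) {
    print(error == nil ? "Write succeeded" : "Write failed: \(error!)")
}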

Android Implementation Details

Bluetooth LE on Android is a bit of a wild west, especially when dealing with proprietary Bluetooth stack implementations, but here’s the general idea of how to get started using the public API. First you need to request both the BLUETOOTH and BLUETOOTH_ADMIN permissions in your app’s manifest. High-level Bluetooth operations are done through the BluetoothAdapter instance, which is common to all apps on the system. You can get the instance through BluetoothManager’s getAdapter() method.

Scanning for peripherals requires that you have implemented the callback interface for receiving scan results. If you need to support API levels 18 to 20, call BluetoothAdapter’s startLeScan() and supply an instance of a BluetoothAdapter.LeScanCallback implementation. From API level 21 onward, first call BluetoothAdapter’s getBluetoothLeScanner() to get an instance of BluetoothLeScanner, and then call startScan() on the instance, supplying a ScanCallback where you can handle scan results. Also note that to get scan results on Marshmallow (6.0) and newer you will need to declare the ACCESS_COARSE_LOCATION or ACCESS_FINE_LOCATION permission in the manifest (and request it at runtime).

In your ScanCallback’s onScanResult() you can get the peripheral from the supplied ScanResult by calling getDevice(), which returns a BluetoothDevice object. To initiate a connection, call connectGatt() on the device, passing a Context, an autoConnect flag, and a BluetoothGattCallback instance as arguments.

Your BluetoothGattCallback’s onConnectionStateChange() is called, as the name implies, when you have connected to or disconnected from the peripheral. Once you’ve established a connection, call discoverServices() on the supplied BluetoothGatt instance. This will result in a call to onServicesDiscovered(), where you can set up read and write characteristics as needed.

You can get a BluetoothGattService instance by calling getService() on the supplied BluetoothGatt object. To register for notifications when a characteristic’s value changes on the peripheral, call BluetoothGatt’s setCharacteristicNotification() with true as an argument, passing the BluetoothGattCharacteristic object you get by calling getCharacteristic() on the service instance. It’s important to also remember to enable notifications on the Client Characteristic Configuration descriptor (whose Bluetooth SIG assigned number is 0x2902): call setValue(BluetoothGattDescriptor.ENABLE_NOTIFICATION_VALUE) on the descriptor instance, and then write the descriptor to the peripheral with BluetoothGatt’s writeDescriptor(). You can get the descriptor by calling getDescriptor() on the BluetoothGattCharacteristic object, supplying the UUID of the descriptor. Incidentally, on iOS all of this is done automatically for you.

When a characteristic’s value changes on the peripheral and you’ve registered for notifications on that characteristic, your BluetoothGattCallback’s onCharacteristicChanged() is called by the framework, and you can then get the current value by calling getValue() on the supplied BluetoothGattCharacteristic object.

Writing to a writable characteristic is done by first setting the required data on the BluetoothGattCharacteristic object using one of the setValue() overloads, and then calling BluetoothGatt’s writeCharacteristic() to send the data over to the peripheral. You get the result of the write operation in BluetoothGattCallback’s onCharacteristicWrite() method if you’re interested in it.

So, there you have it.

May 08

Interesting.

func doStuff(arr: [UInt8]) {
    var value: UInt8! // try also: var value: UInt8 = 0
    let data = NSData(bytes: arr, length: arr.count * sizeof(UInt8))
    data.getBytes(&value, range: NSMakeRange(0, 1)) // read the first byte into value
    println(value)
}

let arr: [UInt8] = [ 0x00, 0x34, 0x56, 0xFF ]
doStuff(arr)

value will be nil if 0x00 is read from the NSData. If value is instead declared as a non-optional UInt8, 0 is read as expected. Just something to keep in mind.

Apr 22

1. Build the project, e.g. with gradlew build
2. Go to the directory where you want to place the headers (e.g. src/main), and execute:

javah -d jni -classpath ~/Library/Android/sdk/platforms/android-22/android.jar:../../build/intermediates/classes/debug {fully.qualified.class.name}

This will place the auto-generated headers in the src/main/jni directory. Change the platform version in front of android.jar as appropriate.

Apr 16

I’m currently in the process of making the landscape support in LoanShark more robust. In portrait mode, the main activity displays a list of loans, while the view loan activity, opened by selecting an item from the list, displays the selected loan’s details. In landscape orientation both views are visible. These are all implemented using fragments, i.e. the activities themselves only provide the layout to glue the fragments together.

When holding the device in portrait mode, the view loan activity displays loan details using the same fragment class as is used on the right-hand side in landscape orientation. When the user is viewing a loan in portrait (the view loan activity) and rotates the device to landscape (landing back in the main activity), the same loan is displayed.

The problem I’ve come across is that since fragment objects belong to a single Activity (they’re stored in the Activity object’s own FragmentManager instance), there’s no way to reuse the fragment shown in the view loan activity in portrait mode as the right-hand side of the landscape layout in the main activity. The reason is that when the user rotates from portrait to landscape, the activity hosting the loan details is destroyed, which in turn destroys the fragment as well.

To keep my fragment implementations modular (and to break direct dependencies between them), I’ve utilized callbacks to the hosting activity class when loaders have finished their jobs. This way I’m able to know e.g. when a loan’s details have finished loading, so that the fragment can be created.

My initial implementation (the one used in version 2.2.0) briefly displays the old fragment that was created when the device was previously in landscape mode, before replacing it with the loan that was selected in portrait mode. This is clearly suboptimal because there’s a visible anomaly, however brief, when rotating the device. I considered patching the problem by hiding the existing fragment in landscape, thus preventing the old fragment from being shown, and then fading the fragment in once the new loan had been loaded. That wouldn’t have been a solution I’d have been satisfied with, so clearly more thinking was needed.

When a loan has been loaded in portrait mode and is about to be displayed in the view loan activity, I store the loan data in the main activity as well (and make sure it’s saved in onSaveInstanceState() in case the activity is destroyed while viewing the loan data). Then, if the user rotates the device to landscape, I check whether the view loan fragment already exists (it does if the device has previously been in landscape mode). In that case, I set the selected loan’s data on the existing fragment. When the activity’s onCreate() finishes, the fragment’s onCreateView() is called, and it’s there that the loan data can then be reused.

Nothing of this sort is mentioned in the documentation concerning fragments. It’s also quite difficult to find solutions to fragment problems by googling them; all you seem to get is countless beginner tutorials on how to use fragments. I came across this solution after spending a couple of nights figuring out how fragment management works internally, and from there I had an idea of how to go about solving the problem.

Mar 21

Putting this here for reference:

export PS1="\t \w\e[1;36m\$(__git_ps1) \e[0m\$ "

Dec 26

Last week I bought one of the new Galaxy Nexus phones that had just become available here in Finland. After using it for a while I realized that the screen sported, for some reason, a yellow tint. Having used my Galaxy S for about 18 months now, the difference was quite a stark one, even to my eyes. I snapped a photo of the Nexus next to my Galaxy S, both at full brightness.

I find it hard to believe that Super AMOLED technology would have gotten worse in the last year and a half. Granted, the resolution is significantly larger (480×800 vs 720×1280), but the difference in quality (pixel density notwithstanding) was too tremendous for me to bear.

I posted the above photo to a couple of websites I frequent, and sought opinions. By that time I had already arranged for my Nexus to be RMA’d, so what I was looking for was reassurance that I would, with high probability, get a better device as a replacement. I didn’t get many replies, but the ones I did get indicated that I might in fact have had a defective screen. I then proceeded to pack the device up and send it back to the store.

Since it’s Christmas, it’ll take a few days more for the package to make the trip to the store (which is in Sweden), and for the new unit to be shipped back to me. I could, in fact, pick another device from my local store, compare the screens, and then sell off the one I like less. With the Nexus selling for 650 euros at my local store, though, this plan might prove too expensive in the end.

I have the tracking code for the package I sent back to the store, so I can start asking questions about the replacement (in case they don’t contact me first) once they receive the shipment.

In all honesty, I don’t have high hopes for the screen of the replacement unit either. I’ve got the impression that the Nexus screen is supposed to be somewhat yellow, and also to feature a blue tint when viewed at an angle. Still, having used the Galaxy S for 18 months, I had expected something at least as good. My initial unit seemed like a clear step backwards in terms of screen quality.

In all other respects, the Galaxy Nexus didn’t disappoint. I was actually a bit uneasy when the specifications were released, because I had expected at least a 1.4 GHz dual-core processor paired with the Mali-400 GPU. That would have made it a bit better than the Galaxy S II. It is obvious, however, that Samsung wants to keep the Galaxy S line as the pride and joy of the company. It doesn’t take a lot to figure out that the Galaxy S III, presumably available in April-May 2012, will blow the Nexus out of the water. The Nexus will probably sell about 1-2 million units in its lifetime, while Samsung is likely expecting the S III to sell 10+ million units before the end of the year.

The most important thing for me in buying the Nexus phone was vanilla Android 4.0. When I got the phone I found it difficult to put down. It wasn’t primarily about the hardware but the latest version of Android, which was truly a joy to use. Certainly it was a different feeling from using Eclair, which was my initial foray into the world of Android.

The Nexus devices are supposed to get updates from Google/Samsung (depending on the build) quite soon after the updates are released, which, for me as a developer, was a major point to consider. I want to play with the latest and greatest as soon as it becomes available, instead of having to wait for months until the manufacturer releases their newest atrocity of a skin on top of the OS. There are ROMs, of course, but there’s really no beating the original, vanilla experience — to use the device just as Google intended.

Edit — My replacement unit arrived yesterday (10 Jan). It’s a different device all right (based on IMEI), but I can’t tell any difference from the unit I sent back. It seems that people who got a “flawless” screen either don’t know what they’re talking about, or just got very lucky. I think my Galaxy Nexus is just as good as it’s supposed to be. Were the screen any better, it would be a manufacturing fluke.

I’ve used the teamhacksung ICS port on my Galaxy S in the meantime, and I have to say that it rocks. Unfortunately, it has also made the wow factor of ICS wear off. When I got my Nexus replacement, it didn’t feel at all like the first time I got my hands on the device. Now it’s just the best phone I’ve ever used, although not significantly so.

Jun 17

Samsung published a press release a couple of days ago, stating that the Galaxy Tab 10.1 will be released in August in the Nordic countries (including Finland). I’ve been waiting for the device to be available since it was announced at CES 2011 in January. It was released in the US this week (nationwide), and I had hoped that a European release would be just around the corner. But no, I still have to wait 2-3 months more in addition to the 5 months I’ve already waited. Enough is enough.

My plan was to get into Honeycomb development by the time I leave for my summer holidays (4 weeks in July), so obviously the Galaxy Tab is a bust for me in that respect. Yesterday I ordered a WiFi-only Motorola Xoom from Amazon.co.uk for 527 euros. I may switch to the Galaxy Tab 10.1 once it is finally released and available in Finland (or the UK) as well, but that also depends on how satisfied I am with the Xoom, and how much money I would get back by selling it.

Apparently the Honeycomb 3.1 update is not out for WiFi-only Xoom yet, but hopefully it will be released by the time I receive my device.

Feb 20

So, on the 11th of February 2011, Nokia announced that they’re going to ditch Symbian and Meego in favor of the Windows Phone platform. This has sent tremors across the Finnish IT industry, which has been heavily dependent on Nokia. What this means is that potentially thousands of people working directly or indirectly on Symbian and/or Meego will be forced to find new job opportunities. Of course, the change won’t happen overnight, but it is obvious that less and less money will be diverted towards Nokia’s own platforms in the coming months.

The organic growth of the Finnish outsourcing companies (which is where I work) has been heavily dependent on Nokia, and chances are Nokia’s decision will hit them hard. Nokia has decided that there will be minimal customization of its Windows Phone devices, which means there will be a minimal amount of customization work to be done. Obviously, lots and lots of people will be moving on from Nokia projects to pastures new.

At the moment no one knows how things will proceed from here. It is certain that there will be layoffs, and lots of them, since there is just not going to be enough work for everyone. This doesn’t concern just engineers, but also management, IT support, HR personnel and others who have worked more or less directly with Symbian and/or Meego.

For what it’s worth, I’ve been working with Symbian for six years and it certainly has taken its toll. I have no sympathy for its fate. I haven’t seen Meego so I can’t say much about it, other than what I’ve read from the press.

Obviously this is a time when one must be very alert to what the future will bring. Since Symbian is a dying breed, there’s no reason to invest in it anymore. Meego and Qt also received heavy blows with Nokia’s announcement, so their futures (both immediate and long-term) look uncertain. Windows Phone is a big question mark, even with Nokia’s backing; the platform has been on the market for only a few months and has generated just a small amount of buzz.

Personally, I see two opportunities in my future: Android and iOS. Both are hot commodities at the moment, especially with Android’s rocket-like surge in popularity since Google released it in 2008. Development on iOS is done using Objective-C, while on Android, Java is the language of choice. The most natural transition for me is of course Android, with Java being closely related to C++. Java is of course an “easier” language than C++, enabling a faster pace of development, although it also has its own characteristics that have to be learned in order to use the language effectively. With Android’s Native Development Kit (NDK) it is also possible to write C++ code when speed or closer access to hardware is of essential importance.

In fact, I decided to start investing in Android already last summer, when I bought the Samsung Galaxy S to replace my iPhone, with a view to learning Android development. During last autumn I started to have a look at Android and did some small-scale experiments. A few weeks ago I got an idea for an application that I started to write with my newly acquired skills. It has taken a few intense weeks of my free time, I’ve learned a tremendous amount, and I’m proud to say that my first Market-worthy application, LoanShark, is ready.
