Oct 26

GreenRobot’s EventBus has a number of benefits compared to local broadcasts. Like broadcasts, events are a tool for decoupling your application architecture, and they’re especially useful when handling responses to asynchronous requests. EventBus makes it possible to receive results on specific threads and is also easier on the keyboard because it requires you to write less boilerplate code.

Note that the discussion here only concerns broadcasts sent using LocalBroadcastManager, i.e. broadcasts that are sent and received only within the same process. System-wide broadcasts sent using Context.sendBroadcast() are not covered.

There’s an accompanying example app on GitHub. The app only explores the performance of EventBus relative to local broadcasts when sending multiple events/broadcasts in succession (i.e. in a for-loop). For single events/broadcasts sent only sporadically, the performance of the delivery itself is a non-issue. Furthermore, you can usually rearrange your event delivery to happen in bigger batches if performance, CPU and battery usage are a concern (which is basically always :) Therefore, you should not base your decision whether to use the EventBus library on the example code alone.

Event delivery on specific threads

You can select which thread EventBus events are delivered on based on the threadMode setting you (optionally) specify in the @Subscribe annotation, as described in the EventBus documentation. Make sure you check out the documentation, since threadMode behavior may not be immediately obvious based on the name alone. Here are some scenarios where you might want to choose a specific threadMode over the others; a rough sketch of subscriber methods for each mode follows the list:

  • @Subscribe(threadMode = ThreadMode.POSTING) The only mode that’s always guaranteed to be synchronous. You may want to use this e.g. to update the UI if you know you’re already on the main thread, or continue some background work on a worker thread. This is the default setting if you don’t specify the mode at all.
  • @Subscribe(threadMode = ThreadMode.MAIN) You can use this to update the UI when work completes on a background thread.
  • @Subscribe(threadMode = ThreadMode.BACKGROUND) For handling events on a dedicated worker thread one-by-one. When posting subsequent events from outside the worker thread while work is still ongoing, further requests will be queued. Note that if you’re already on the worker thread, any further events posted using this mode will be posted synchronously. If you need to post an event asynchronously from the event handler, consider using the ASYNC mode instead.
  • @Subscribe(threadMode = ThreadMode.ASYNC) This option is useful when you need to handle multiple callbacks concurrently, like when executing multiple network requests. The calling thread is from a separate ThreadPool of worker threads, and never the posting thread. Remember to be careful with this in your event handler, since multiple threads may be executing the code asynchronously.
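As a rough sketch (the event class and handler names below are made up for illustration, and you would normally spread the handlers across the components that actually need them), subscriber methods for the different modes could look like this:

public class DownloadFinishedEvent { /* payload fields omitted */ }

// Runs synchronously on whichever thread posted the event; keep it fast.
@Subscribe(threadMode = ThreadMode.POSTING)
public void onDownloadFinishedPosting(DownloadFinishedEvent event) {
    // Lightweight bookkeeping only; this blocks the posting thread.
}

// Runs on the main thread; safe to touch views here.
@Subscribe(threadMode = ThreadMode.MAIN)
public void onDownloadFinishedMain(DownloadFinishedEvent event) {
    // e.g. update a progress indicator
}

// Posted from the main thread: handled sequentially on EventBus' background thread.
// Posted from a background thread: handled directly on the posting thread.
@Subscribe(threadMode = ThreadMode.BACKGROUND)
public void onDownloadFinishedBackground(DownloadFinishedEvent event) {
    // e.g. write the result to a database
}

// Always runs on a thread from EventBus' thread pool, never on the posting thread;
// multiple events may be handled concurrently.
@Subscribe(threadMode = ThreadMode.ASYNC)
public void onDownloadFinishedAsync(DownloadFinishedEvent event) {
    // e.g. kick off a follow-up network request
}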

By contrast, a BroadcastReceiver’s onReceive() is always called on the process’ main thread, unless you call sendBroadcastSync() from some other thread, in which case, as the name implies, the broadcast is delivered synchronously on the calling thread.

Callback registration

To register callbacks that you want EventBus to call when certain types of event objects are posted, you specify the posted object type as a parameter to your callback methods. You also need to register the object that implements your callbacks with the bus. This is roughly equivalent to registering a broadcast receiver with a set of intent filters, as shown in this example:

public class MyActivity extends Activity {  // ...

    private MyReceiver myReceiver = new MyReceiver();
    private static final String BROADCAST_ACTION = "BROADCAST_ACTION";

    @Override public void onStart() {
        super.onStart();
        LocalBroadcastManager.getInstance(this)
            .registerReceiver(myReceiver, new IntentFilter(BROADCAST_ACTION));
        EventBus.getDefault().register(this);
    }
    
    @Override public void onStop() {
        EventBus.getDefault().unregister(this);                  
        LocalBroadcastManager.getInstance(this).unregisterReceiver(myReceiver);
        super.onStop();
    }

    @Subscribe public void onEvent(MyEvent event) {
        // Do stuff with received event ...
    }

    private static class MyReceiver extends BroadcastReceiver {
        @Override public void onReceive(Context context, Intent intent) {
            // Do stuff with received broadcast ...
        }
    }
}

As you can see, writing a broadcast receiver is somewhat more cumbersome since you need to extend the BroadcastReceiver class and supply an intent filter, whereas you can just annotate your event handling methods with @Subscribe when using EventBus.

Posting events and broadcasts

This is an area where LocalBroadcastManager and EventBus are very similar to each other, without much boilerplate even with LocalBroadcastManager:

public void postEvent() {
    MyEvent event = new MyEvent();
    EventBus.getDefault().post(event);
}

public void sendBroadcast() {
    Intent intent = new Intent("my_action");
    LocalBroadcastManager.getInstance(this).sendBroadcast(intent);
}

Delivering content

With broadcasts, you need to package your payload using intent extras, which can often require quite a bit of boilerplate code, especially when using Parcelables. With EventBus you can place your content within the event object itself. This requires no cumbersome packaging, no parcelables, etc. The content of the event you post is delivered as-is to the callback handler. Let’s expand the methods in the previous section to supply some payload as well:

class MyObject {
    public String doStuff() {
        // ...
        return "did stuff";
    }
}

class MyEvent {
    int value;
    String text;
    MyObject myObject;

    public MyEvent(int value, String text, MyObject myObject) {
        this.value = value;
        this.text = text;
        this.myObject = myObject;
    }
}

public void postEvent() {
    MyEvent event = new MyEvent(123, "abcdef", new MyObject());
    EventBus.getDefault().post(event);
}

public void sendBroadcast() {
    Intent intent = new Intent("my_action");
    intent.putExtra("int_extra", 123);
    intent.putExtra("text_extra", "abcdef");
    intent.putExtra("myobject_extra", new MyObject());
    LocalBroadcastManager.getInstance(this).sendBroadcast(intent);
}

Passing the example instance of MyObject in a broadcast intent is not as simple as it looks, since the MyObject class needs to implement the Parcelable interface, as specified in the Android documentation. (Note that you should really use constant definitions for the names of extras; they’re literals above only for conciseness and clarity.)
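For illustration, here’s roughly what a minimal Parcelable version of MyObject might look like (a sketch only; since this simplified class has no fields, there is nothing to actually write to or read from the Parcel):

class MyObject implements Parcelable {

    public String doStuff() {
        return "did stuff";
    }

    // Parcelable boilerplate; a real class would also write and read its fields here.
    @Override
    public int describeContents() {
        return 0;
    }

    @Override
    public void writeToParcel(Parcel dest, int flags) {
        // No fields to write in this simplified example.
    }

    public static final Parcelable.Creator<MyObject> CREATOR =
            new Parcelable.Creator<MyObject>() {
                @Override
                public MyObject createFromParcel(Parcel in) {
                    return new MyObject();
                }

                @Override
                public MyObject[] newArray(int size) {
                    return new MyObject[size];
                }
            };
}

Compare that to the EventBus case, where the plain MyEvent class above needs none of this.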

With EventBus, you can use the posted event directly in your callback handler, whereas with broadcasts you first need to unpack the payload from extras in your broadcast receiver:

@Subscribe
public void onEvent(MyEvent event) {
    Log.d(TAG, "value: " + event.value);
    Log.d(TAG, "text: " + event.text);
    Log.d(TAG, "myObject.doStuff(): " + event.myObject.doStuff());
}

// ...

private static class MyReceiver extends BroadcastReceiver {

    @Override
    public void onReceive(Context context, Intent intent) {
        int value = intent.getIntExtra("int_extra", -1);
        String text = intent.getStringExtra("text_extra");
        MyObject myObject = intent.getParcelableExtra("myobject_extra");
        Log.d(TAG, "value: " + value);
        Log.d(TAG, "text: " + text);
        Log.d(TAG, "myObject.doStuff(): " + myObject.doStuff());
    }
}

Conclusion

EventBus is a great library for decoupling your application architecture, delivering results on specific threads, and cutting down on boilerplate when handling event payloads. Compared to plain local broadcasts, its tradeoff is somewhat lower raw performance, but it is still plenty fast for most tasks. All things considered, EventBus is a very good choice for most intents and purposes thanks to its versatility.

This article was also published at my employer’s blog at https://bitfactor.fi/en/2016/11/01/comparison-of-eventbus-and-androids-local-broadcasts/

Apr 24

Perhaps not broken, but documented inadequately. In http://developer.android.com/training/implementing-navigation/temporal.html there’s a simple guide on how to set up back stack navigation when opening an app e.g. from a notification using a “deep link” to an activity that is not the topmost one in the app, so that the user can navigate upwards in the stack by pressing the back key.

Having wasted a couple of hours trying to make this work in LoanShark, I finally noticed a warning in LogCat:

W/ActivityManager﹕ Permission Denial: starting Intent [...] not exported from uid 10131

What the documentation forgot to mention is that you need to mark the activity you want to jump to from e.g. a notification as exported, i.e. set android:exported="true" on it in the manifest. I added the attribute and encountered no more problems. The weird thing is that it actually did sometimes work even without the exported attribute… strange indeed. But now it works 100% of the time, fortunately.
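For reference, the deep-link setup from the guide looks roughly like the sketch below; DetailActivity, the icon resource and the context variable are hypothetical, and the point of this post is that DetailActivity’s manifest entry additionally needs android:exported="true":

// Hypothetical target activity; its <activity> entry in AndroidManifest.xml needs
// android:exported="true" (and android:parentActivityName for Up navigation).
Intent detailIntent = new Intent(context, DetailActivity.class);

TaskStackBuilder stackBuilder = TaskStackBuilder.create(context);
stackBuilder.addParentStack(DetailActivity.class);   // Synthesize the back stack
stackBuilder.addNextIntent(detailIntent);

PendingIntent pendingIntent =
        stackBuilder.getPendingIntent(0, PendingIntent.FLAG_UPDATE_CURRENT);

Notification notification = new NotificationCompat.Builder(context)
        .setSmallIcon(R.drawable.ic_notification)
        .setContentTitle("Open loan details")
        .setContentIntent(pendingIntent)
        .setAutoCancel(true)
        .build();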

Apr 03

A TextView with italic text gets cropped at the rightmost edge when the view is set to wrap_content. Reportedly this also happens when the content is set to have gravity="right".

http://stackoverflow.com/q/4353836

I haven’t plunged into the source code on this one, but it seems that Android’s layout engine measures the TextView’s width along the text baseline, and doesn’t account for the fact that skewing the glyphs makes them lean past that measured width at the top, so the last character gets clipped. The workaround is to simply add a space character at the end of the string.
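In code the workaround is nothing more than this (loanTitleView, R.id.loan_title and loanTitle are hypothetical names, not LoanShark’s actual ones):

// A trailing space gives the skewed glyphs room so the last character isn't clipped.
TextView loanTitleView = (TextView) findViewById(R.id.loan_title);
loanTitleView.setText(loanTitle + " ");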

I ran into this problem when designing the card layout for LoanShark loans. I wonder how many applications’ layouts would break if there was to be an actual fix to this problem…

Dec 06

If you’ve been dealing with Android fragments, odds are you’ve used the FragmentTransaction.addToBackStack() method. Often this method is called with a null argument, and that’s fine for most uses. Afterwards, you can pop the top of the back stack by calling FragmentManager.popBackStack() and you’re done.

You might be wondering what use is the String parameter in the addToBackStack() method. It can be quite useful in cases where you need to pop a number of items from the FragmentManager back stack at once. For example, you may want to wipe the slate clean of all fragments that you’ve been adding/replacing. You can do this as follows:

    FragmentManager fragMan = getFragmentManager();
    int count = fragMan.getBackStackEntryCount();
    for (int i = 0; i < count; ++i) {
        fragMan.popBackStack();  // Asynchronous; the pops are executed on the next frame
    }

Now, in case you want to leave one or more items on the back stack instead of popping the whole lot, you could try to keep track of the number of back stack items to pop, but this is tedious and error-prone to say the least. Here’s where addToBackStack()’s String argument comes into play. The String argument is the name of a back stack “state” you want to keep track of.

Let’s say that you have a two-pane layout, and on the right side you have a fragment that you’ve replace()d into a layout container. This is the state you’d always like to be able to return to. You then proceed to utilize other fragments in the same layout spot as well, calling FragmentTransaction.replace() in the process, along with addToBackStack(). If you then want to pop all other back stack items but the first one, you can do this by passing a predefined state name to the first transaction’s addToBackStack() call:

    private static final String INITIAL_FRAGMENT_STATE = "initial_fragment_state";
    // ...
    FragmentTransaction ft = getFragmentManager().beginTransaction();
    ft.replace(...);
    ft.addToBackStack(INITIAL_FRAGMENT_STATE);

Then, when you need to wipe the slate clean and lose all but the initial state from the back stack, pass the name to popBackStack() along with a flags value of 0, which pops everything above the named state but leaves that state itself in place:

    getFragmentManager().popBackStack(INITIAL_FRAGMENT_STATE, 0);

And there you have it. No need to keep track of the number of fragments in the back stack in each valid state of the application or any such nonsense. Just make use of named back stack states.

Nov 06


A while back I released a new, completely rewritten version of LoanShark. The current version is 2.0.6, which contains a number of bugfixes, and I consider it to be quite stable. I think writing this version took considerably more effort than the first one, or maybe I have just forgotten how much effort went into the first attempt.

There were problems with the database migration code from the previous version. I’ve done away with some tables altogether to make things simpler, which necessitated writing quite a bit of migration code. Migration is engaging and frustrating at the same time. On the one hand, you need to work out the old and new database structures exactly and figure out the simplest way to transfer the old data into the new format. On the other hand, there’s no way around writing the migration code unless you want your app to crash in your users’ faces when they update to the latest version.
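For the curious, this kind of work happens in SQLiteOpenHelper’s onUpgrade() method. A rough, purely hypothetical sketch of folding a removed table into a remaining one might look something like this (the table and column names are made up, not LoanShark’s actual schema):

@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
    if (oldVersion < 2) {
        db.beginTransaction();
        try {
            // Move the data we still need from the old table into the main one...
            db.execSQL("ALTER TABLE loans ADD COLUMN contact_name TEXT");
            db.execSQL("UPDATE loans SET contact_name = "
                    + "(SELECT name FROM contacts WHERE contacts._id = loans.contact_id)");
            // ...and then drop the table that is no longer needed.
            db.execSQL("DROP TABLE IF EXISTS contacts");
            db.setTransactionSuccessful();
        } finally {
            db.endTransaction();
        }
    }
}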

I thought I had ironed out the potential migration issues before releasing the first update, but there were several bug reports from users. I guess most of the users don’t bother to send an error report when an app crashes on startup; they’ll just uninstall it and be on their merry way. Fortunately it also seems that they don’t even bother to write those nasty 1-star reviews either.

During last week I was contacted by two users who had crashing problems, one of which wasn’t related to database migration. I’m very thankful for all feedback, even if it’s crash reports. It gives me the extra kick in the backside that keeps me motivated to continue development.

Oct 20

Sometimes it’s convenient to direct the output of a command to both the screen and a file. Here’s how to do it:

command 2>&1 | tee output.log

The tee command copies its stdin to stdout and also writes a copy to the given file. The 2>&1 part redirects stderr into stdout, so that error output ends up in the pipe as well. You can use tee -a to append to the log instead of overwriting it.

May 23

Searching for elements in an STL container is a common task. There are several ways to accomplish this. Let’s have a look at some of them.

Suppose we have a collection of elements in an STL container. The element type is defined as a simplified phonebook entry, as follows:

struct Entry
{
    Entry( const string& n, const string& p ) : name( n ), phoneNum( p ) {}

    std::string name;
    std::string phoneNum;
};

We then push some entries into a container, e.g. a list, like follows:

std::list<Entry> phonebook;

phonebook.push_back( Entry( "James Bond", "123" ) );
phonebook.push_back( Entry( "Felix Leiter", "456" ) );
phonebook.push_back( Entry( "Vesper Lynd", "789" ) );

We’d then like to look up the phone number of a contact. We know the contact’s name, so we can search it as follows using a for loop and an iterator:

std::string searchFor = "James Bond";
std::list<Entry>::const_iterator iter = phonebook.begin();

for ( ; iter != phonebook.end(); ++iter )
{
    if ( iter->name == searchFor )
    {
        break;  // No need to look any further
    }
}

if ( iter != phonebook.end() )
{
    std::cout << "Call " << searchFor << " at " << iter->phoneNum << endl;
}

Quite simple. We could also make this into a function, and call it whenever we need to search for a number:

string GetNumber( const std::list<Entry>& phonebook, const string& searchFor )
{
    std::list<Entry>::const_iterator iter = phonebook.begin();

    for ( ; iter != phonebook.end(); ++iter )
    {
        if ( iter->name == searchFor )
        {
            break;  // No need to look any further
        }
    }

    if ( iter == phonebook.end() )
    {
        return string();  // Not found; don't dereference the end iterator
    }

    return iter->phoneNum;
}

There is also a more elegant way; we can write a comparison operator that returns true if the given Entry matches a specified std::string, and use it with the STL's find() algorithm. For the purposes of this example, it doesn't matter if the operator is a member operator or a global one. Here's the global version:

bool operator==( const Entry& e, const string& n )
{
    if ( e.name == n )
    {
        return true;
    }

    return false;
}

Make sure you have the parameters so that the container element type is on the left side, and the comparison argument (i.e. the third parameter to find()) is on the right side; this is what the algorithm expects.

Armed with this function, we are able to use the find() algorithm (remember to #include <algorithm>) to do our bidding:

std::string searchFor = "Vesper Lynd";

std::list<Entry>::const_iterator iter =
    std::find( phonebook.begin(), phonebook.end(), searchFor );

if ( iter != phonebook.end() )
{
    std::cout << "Call " << searchFor << " at " << iter->phoneNum << endl;
}

The find() algorithm compares each element of the specified range (between begin() and end(), as specified above) against the third argument, which happens to be the search string. Given that we now have a comparison operator taking an Entry struct and an std::string defined, the compiler calls the operator for each element in the range. The find() terminates either when the comparison returns true, or when the iterator position specified as the second argument to the algorithm is reached, i.e. phonebook.end().

You can also use find_if() with a function object as the search criterion. Function objects also have more powerful features than plain functions, e.g. the ability to maintain state between invocations, but for this example we'll use one just as a simple comparison criterion. Let's have a look:

class CompPred : public binary_function<Entry, std::string, bool>
{
public:
    bool operator()( const Entry& e, const std::string& s ) const
    {
        if ( e.name == s )
        {
            return true;
        }

        return false;
    }
};

Here we derive a class from binary_function (found in <functional>) and add a function call operator (operator()). The template parameters given to the base class are the two argument types (here Entry and std::string) and the return type (bool); deriving from this instantiation of the binary_function template gives our function object the typedefs that adapters such as bind2nd() expect. We then use the Entry and std::string types as parameters for the function call operator, akin to what we did with operator== above.

We can then do the search using a slightly modified version of the previously presented code:

std::string searchFor = "Vesper Lynd";

std::list<Entry>::const_iterator iter =
    std::find_if( phonebook.begin(),
                  phonebook.end(),
                  bind2nd( CompPred(), searchFor ) );

if ( iter != phonebook.end() )
{
    std::cout << "Call " << searchFor << " at " << iter->phoneNum << endl;
}

Note the use of the find_if() algorithm. Its third argument is a unary predicate: a pointer to a function, or a function object, that takes a single argument and returns a bool. The bind2nd() adapter takes a binary function object and a value, and binds that value to the second argument, thereby converting the binary function (taking two arguments) into a unary one (taking a single argument) that find_if() can call for each element.

This post has droned on for quite some time, so I'll cut it short. There's loads of stuff I deliberately left unexplained here, e.g. why the function object's operator() must be declared const, function objects and function adapters in general, and template specializations, but I'll save those for another time, my dear imaginary readers.

Feb 23

The fail fast idiom is all about catching software errors at the earliest possible stage. The idiom is surprisingly often neglected, as if developers loved spending days tracking down obscure bugs that could just as well have been caught with a tiny bit of extra work in the first place.

Failing fast doesn’t mean that the number of failures increases. Rather, it means that the failures are not neglected and ignored, but found as soon as possible. For example, suppose that we call a function that returns a value, which in our subsequent code must then be below a certain maximum value:

int value = p->FunctionCall();
if ( value > max )
{
    value = max;
}

DoStuff( value );

Here, if the returned value is greater than max, it is clamped to max. DoStuff() requires that the value is less than or equal to max. Now, this clamping may or may not be the intended behaviour. If setting the value to max is just a fail-safe to prevent DoStuff() from crashing when p->FunctionCall() returns an erroneous value, it is better to make sure already at this point that the value really is within the specified bounds by adding an assertion.
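For instance, with the standard assert() macro the fragment above could fail fast like this (just a sketch; since assert() compiles away in release builds with NDEBUG defined, you may still want to keep the clamping as a fail-safe):

#include <cassert>

int value = p->FunctionCall();
assert( value <= max );    // Fail fast in debug builds if the invariant is broken

if ( value > max )
{
    value = max;           // Fail-safe clamping for release builds
}

DoStuff( value );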

Assertions are one of the foresighted developer’s best friends. There are a multitude of ways in which assertions are used, for example in Symbian code there are e.g. __ASSERT_DEBUG() and __ASSERT_ALWAYS() macros, for which you can provide a function to be called if the assertion fails. Then there’s the good old assert() macro from the standard C library, which is defined to write to the standard error output (prior to calling abort()) at least the asserted expression and the file name and line number where the assertion failed.

Where to put assertions, then? Usually there are quite a few places in code where a certain invariant must hold true. For example, there may be a member variable which must have a certain value upon function invocation, or else the results of the function are undefined. As a pseudocode example, imagine that we’re supposed to sell toys, and all their bells and whistles must be painted before they’re ready for shipment.

void Toy::Paint()
{
    for_each( bells_.begin(), bells_.end(), PaintFunc );
    for_each( whistles_.begin(), whistles_.end(), PaintFunc );
}

Let’s say that we add a new Toy to a vector container to be shipped to a retailer but forget to call its Paint() function. We now have a bunch of toys but, unbeknownst to us due to our negligence, one of them is missing paint from its bells and whistles. Uh oh, the delivery truck is here, and those guys don’t have all day. Better call the Offload() function to make some space in the warehouse:

void WareHouse::Offload( const std::vector<Toy>& toys )
{
    for_each( toys.begin(), toys.end(), ShipFunc );
}

Now we’re done with the toys, but one has slipped through without all the invariants checked. A good solution here would be to check in the Offload() function that we’re shipping out kosher items. Supposing that bells and whistles derive from a common class, Part, with an IsPainted() function, and that Toy provides an IsPainted() of its own that checks all of its parts, we can add an assert to the Offload() function as follows:

void WareHouse::Offload( const std::vector<Toy>& toys )
{
    for ( std::vector<Toy>::const_iterator iter = toys.begin();
           iter != toys.end(); ++iter )
    {
        assert( iter->IsPainted() );
    }

    for_each( toys.begin(), toys.end(), ShipFunc );
}

Here the assert will stop program execution to prevent the shipping of unpainted toys. What we’re trying to achieve is that already during the development phase we catch the code paths that would result in unpainted toys being shipped. It’s better to make problems manifest themselves as early as possible (to fail fast) than to hunt for obscure bugs during the final, often hectic stages of an imminent release.

Feb 23

Consider the following class:

class Gadget
{
public:
    void MakeNewWidget()
    {
        delete widget_;
        widget_ = NULL;
        widget_ = new Widget;
    }

    const Widget& Get() { return *widget_; }

private:
    Widget* widget_;
};

In the MakeNewWidget() function the widget_ class member is first deleted and set to NULL before creating a new Widget object and assigning it to widget_. Plain and simple, and you see a lot of this kind of code around.

There’s a particular reasoning behind this kind of code, especially in memory-constrained environments. The aim is to avoid having a temporary Widget object in memory in addition to the widget_ member, thus saving on memory consumption.

But there’s something horribly wrong here. The “aim” of the code may be accurate, but it’s aiming straight at the coder’s foot. What would happen if the widget_ = new Widget; line threw an exception? The answer is that the Gadget object would be left in an inconsistent state, because the original widget_ member has already been deleted. If there’s a catch somewhere nearby down the call stack, the code there may well assume that the old state of the Gadget object is still valid; i.e. there’s been no change to its internals even though there was an exception.

We can remedy the situation by creating the new Widget object in a temporary variable, deleting the original widget_, and only then assigning the temporary to widget_. Like so:

void MakeNewWidget()
{
    Widget* temp = new Widget;  // If this throws, widget_ is still intact
    delete widget_;
    widget_ = temp;
}

Temporary variables such as these may not always be aesthetically pleasing, and we are using twice the amount of memory by briefly having two Widget objects present, but this code is significantly safer for the callers. No longer do the callers have to worry about the state of the Gadget object in case MakeNewWidget throws an exception, which is as it should be.

Aug 26

A C++ compiler implicitly creates a copy constructor and an assignment operator for any class that does not explicitly define them.

class Widget
{
public:
    Widget( Gadget* gadget )
    : gadget_( gadget ) {}

    ~Widget() { delete gadget_; }

private:
    Gadget* gadget_;
};

 
Here, we have not explicitly defined the copy constructor or the assignment operator, so the compiler will add them, resulting in a class like this:

class Widget
{
public:
    Widget( Gadget* gadget )
    : gadget_( gadget ) {}

    ~Widget() { delete gadget_; }

    Widget( const Widget& widget )
    : gadget_( widget.gadget_ ) {}

    Widget& operator=( const Widget& widget )
    {
        gadget_ = widget.gadget_;
        return *this;
    }

private:
    Gadget* gadget_;
};

 
Consider the following code fragment that uses the Widget class:

int main()
{
    Gadget* gadget = new Gadget;
    Widget* w1 = new Widget( gadget );

    Widget w2( *w1 );           // Copy using the copy constructor
    Widget w3( new Gadget );
    w3 = *w1;                   // Copy using the assignment operator

    return 0;
}

 
The default copy constructor and assignment operator copy all member variables from one object to the other as-is. The problem arises when an object holds a pointer to some other object; gadget_ in this case. When copying or assigning an object using the automatically generated functions, only the pointer value is copied. Both objects’ member variables now point to the same Gadget. When one of the objects is destroyed, it deletes the gadget_ member object whose pointer it held. The result is that the other object’s pointer now points to deleted memory, and all bets are off.

When you have classes that contain pointers to other objects, consider whether it is OK to just copy the pointers using the automatically generated copy constructor and assignment operator. If the ownership of the pointed-to objects does not lie with the objects being copied, memberwise copying using the default copy constructor and copy assignment is fine. But if the objects own the data behind the pointers, you’ll want to either disable the copy constructor and assignment operator (by making them protected/private), or make sure the object is actually cloned (deep-copied) so that the new object is self-contained. In that case the pointers must not point to the same objects but to new ones, created when the object is cloned in the copy constructor or assignment operator.
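As a sketch of the deep-copy option (assuming Gadget itself is copyable), the Widget class could be written as follows; for the other option you would instead declare the copy constructor and assignment operator private and leave them unimplemented:

class Widget
{
public:
    Widget( Gadget* gadget )
    : gadget_( gadget ) {}

    ~Widget() { delete gadget_; }

    // Deep copy: clone the owned Gadget so that each Widget has its own instance.
    Widget( const Widget& widget )
    : gadget_( new Gadget( *widget.gadget_ ) ) {}

    Widget& operator=( const Widget& widget )
    {
        if ( this != &widget )
        {
            Gadget* temp = new Gadget( *widget.gadget_ );  // May throw; do it first
            delete gadget_;
            gadget_ = temp;
        }
        return *this;
    }

private:
    Gadget* gadget_;
};

Note how the assignment operator creates the new Gadget before deleting the old one, for the same exception-safety reasons discussed above.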
