There’s More Than One Way to Skin a Singleton

After working with C++ for a while, we all come across situations where we need to create one and only one instance of a given object type. However, depending on your needs, there’s more than one way to go about implementing a singleton. All of the implementations will have certain characteristics in common. For instance, the class constructor will be private to prevent anyone from creating an instance of the class directly. (The destructor may need to be private as well, but we’ll discuss that below.) Instead, a static getInstance() method will provide access to the one and only class instance:

class MyObject
{
private:
    MyObject();

public:
    // We'll see below that the type returned by this method
    // may vary depending on how you implement the singleton.
    static MyObject& getInstance();
};

We’ll take a look at the “permanent” singleton, the “lazy” singleton, and the “disappearing” singleton. Though these descriptions aren’t official or even commonly adopted, they accurately describe the behavior of the singleton object.

The Permanent Singleton

The “permanent” singleton is exactly what its name implies. It’s created automatically when your application starts and it remains intact until your application terminates.

// Create the object with static storage duration so it is
// initialized before the application begins execution.
static MyObject instance;
MyObject& MyObject::getInstance()
{
    return instance;
}

Creating a static instance of the class ensures the object is initialized and ready for use when the application starts. Because the object is instantiated before the application begins execution, this implementation is advantageous in multithreaded environments since there is no need to worry about multiple threads racing to instantiate the object. However, we will need to ensure the object does not carry any initialization dependencies on other statically instantiated objects and that it can actually allocate any internal resources during its early initialization. (Read about Schwarz Counters for more information on handling initialization dependencies between static objects.)

Because the singleton is created when the application starts and can never be destroyed, its internal state is maintained throughout the lifetime of the application.

Most likely, we will want to make the class destructor private to prevent anyone from doing something like this:

MyObject& niceObject = MyObject::getInstance();
MyObject* evil_ptr = &niceObject;
delete evil_ptr;

Hopefully, no one would perform this exact series of steps, but if the object reference is ever converted to a pointer, it becomes possible to delete the object, and future calls to getInstance() will hand back a reference to a destroyed object.
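
If we do make the destructor private, a minimal sketch might look like the following. The instance is moved into the class as a static data member (sInstance is a name introduced here) so the implicit destructor call at program exit still happens in a context that has access to the private destructor:

class MyObject
{
private:
    MyObject();
    ~MyObject();                 // private: outside code can no longer delete the instance

    static MyObject sInstance;   // the one and only instance

public:
    static MyObject& getInstance();
};

// In the implementation file:
MyObject MyObject::sInstance;

MyObject& MyObject::getInstance()
{
    return sInstance;
}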

Advantages:

  • Avoids threading problems associated with instantiation.
  • Maintains object state throughout the lifetime of the application.

Disadvantages:

  • Resources are allocated permanently.

The Lazy Singleton

If your singleton object consumes significant resources, or it may not be needed in every run of your application, you may prefer the “lazy” singleton, which avoids creating an instance of the class until the application actually needs it.

MyObject& MyObject::getInstance()
{
    // The instance has static storage duration; it will be
    // constructed the first time this method is called.
    static MyObject instance;
    return instance;
}

Or we might prefer to allocate memory for the object when it’s first needed:

MyObject* MyObject::getInstance()
{
    // Only the pointer has static storage; the object itself is
    // allocated on the heap the first time this method is called.
    static MyObject* instance = NULL;
    if (instance == NULL)
    {
        instance = new MyObject();
    }
    return instance;
}

However, waiting to either allocate or initialize the singleton instance means we also need to exercise some caution if the object is used in a multithreaded environment. The second implementation carries the risk of allocating multiple instances of the class. While the first implementation cannot allocate multiple instances, it can result in multiple calls to the class constructor. A locking mechanism will need to be placed around the declaration of the singleton object in the first instance and around the if block in the second.
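
As a minimal sketch of the second implementation with the if block guarded, assuming a hypothetical Mutex class with lock()/unlock() methods (not something defined in this post):

// Hypothetical Mutex with lock()/unlock(); created at file scope so
// it is constructed before main() and before any threads exist.
static Mutex sInstanceLock;

MyObject* MyObject::getInstance()
{
    static MyObject* instance = NULL;

    sInstanceLock.lock();
    if (instance == NULL)
    {
        instance = new MyObject();
    }
    sInstanceLock.unlock();

    return instance;
}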

Thinking about the class destructor, we still don’t want to make it public. We might think we could use the destructor to reset the instance pointer, allowing us to consume the class resources only while the object is needed and free them by destroying the object when we’re done with it. However, there’s no way to guarantee we wouldn’t be deleting a pointer that is still being referenced elsewhere in the application. To do that, we would need some sort of reference counting on the object, which leads us to the next singleton implementation.

Advantages:

  • Avoids resource allocation until the object is needed.
  • Maintains object state throughout the lifetime of the application.

Disadvantages:

  • Once resources are allocated, they’re allocated permanently.
  • Not thread-safe without additional safeguards.

The Disappearing Singleton

Is there a way we can implement a singleton that’s guaranteed to only be around when we need it – a singleton that “disappears” when we don’t want it anymore? While standard C++ doesn’t provide us with a way to do this, we can use the popular Boost libraries to help us out.

// Assumes <boost/shared_ptr.hpp> and <boost/weak_ptr.hpp> are included
// and the boost:: prefix is omitted for readability.
shared_ptr<MyObject> MyObject::getInstance()
{
    static weak_ptr<MyObject> instance;

    shared_ptr<MyObject> ptr = instance.lock();
    if (!ptr)
    {
        ptr.reset(new MyObject());
        instance = weak_ptr<MyObject>(ptr);
    }
    return ptr;
}

The boost::weak_ptr<> and boost::shared_ptr<> templates allow us to maintain a reference count on our singleton object. When a shared_ptr is allocated, it increments the reference count on an object. When it goes out of scope, it decrements the reference count. When the reference count hits zero, the object is deleted. If we couple the shared_ptr with the weak_ptr, we have a way to track usage of our singleton object and free it when it’s no longer needed. A shared_ptr is created for our class instance when it’s first needed and the weak_ptr “follows” it around until it’s deleted. Trying to lock() the weak_ptr will return the singleton instance of our class, or it will return an empty shared_ptr, indicating we need to create a new singleton instance.
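
A brief usage sketch (the function names are only for illustration) shows how the instance comes and goes with its users:

void doSomeWork()
{
    // First caller: the weak_ptr is empty, so a new instance is created.
    shared_ptr<MyObject> obj = MyObject::getInstance();

    // ... use obj ...

}   // obj goes out of scope here; if no other shared_ptr is alive,
    // the reference count drops to zero and the instance is destroyed.

void doMoreWorkLater()
{
    // A later caller finds the weak_ptr expired and receives a brand
    // new instance, with freshly initialized state.
    shared_ptr<MyObject> obj = MyObject::getInstance();
}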

The destructor for our class could be left public so the smart pointer machinery can call it, but a better approach is to keep it private and give the smart pointer a way to destroy the instance, for example by declaring the relevant Boost classes as friends or, depending on the shared_ptr implementation, by passing a custom deleter (such as a private static destroy function) when the instance is created. Either way, the goal is to prevent anyone from doing something like this:

shared_ptr<MyObject> nice_ptr = MyObject::getInstance();
MyObject* evil_ptr = nice_ptr.get();
delete evil_ptr;

One caveat with the disappearing singleton is that it may not maintain state across calls to getInstance(). Our first two implementations maintained object state for everyone calling getInstance() because we never destroyed the singleton object. If the singleton needs to maintain any state information across the lifetime of the application, it will need some additional help.

Advantages:

  • Avoids resource allocation until the object is needed.
  • Frees resources when they are no longer needed.

Disadvantages:

  • Not thread-safe without additional safeguards.
  • Does not maintain object state.

Before jumping on any particular singleton implementation, it’s a good idea to consider how the singleton object will be used within your application. Is threading performance an issue? Is resource usage an issue? Is consistent internal state an issue? Any approach will have its own set of advantages and disadvantages that should be considered, and it is probably wise to document the reasoning for selecting a particular approach within the object’s getInstance() implementation.

const-antly Changing

In addition to the virtual and inline method modifiers, C++ also allows the addition of the const modifier on class methods. The const modifier indicates the method does not alter the state of class member variables.

class MyNumber
{
public:
    int getValue() const
    {
        return mValue;
    }

    void setValue(int inValue)
    {
        mValue = inValue;
    }

private:
    int mValue;
};

The implementation of getValue(), as long as it is declared const, cannot alter the value of mValue; the method simply has read-only access to all member variables. Declaring a method as const is helpful because it allows us to handle read-only instances of the MyNumber class:

void printMyNumber(const MyNumber& inNumber)
{
    cout << inNumber.getValue();

    // We can't call inNumber.setValue() because our reference
    // is to a const MyNumber object and setValue() is not a
    // const method.
}

...

MyNumber x;
x.setValue(14);
// x is passed by const reference; printMyNumber() gets read-only access.
printMyNumber(x);
// We know the value of x is still 14.

But what if we want to use a locking mechanism to make our class thread-safe?

class MyLock
{
public:
    void lock();
    void unlock();
};

class MyNumber
{
public:
    int getValue() const
    {
        mLock.lock();
        const int val = mValue;
        mLock.unlock();
        return val;
    }

    void setValue(int inValue)
    {
        mLock.lock();
        mValue = inValue;
        mLock.unlock();
    }

private:
    MyLock mLock;
    int mValue;
};

Can we do this? Notice the lock() method on MyLock isn’t const, presumably because the internal state of the MyLock object isn’t preserved during its execution (obviously the state of the lock changes). The compiler will report an error when it encounters mLock.lock() within a const method. We could remove the const modifier from getValue(), but then we can no longer pass around const references or pointers to MyNumber when we need to retrieve its value.

We now have a question to consider: What do we mean when we say a const method does not alter the internal state of an object? From within MyNumber, we can say getValue() no longer preserves the state of MyNumber because the bits inside mLock are changing. But from another perspective (outside of MyNumber), does anyone really care about the changing state of mLock? It’s an internal object that is invisible as far as any code using MyNumber is concerned. It’s an implementation detail that remains hidden from the outside world. So how can we enjoy the benefits of keeping getValue() as a const-modified method?

Fortunately, C++ provides a solution to this problem. One possibility would be casting the const away from the this pointer:

int MyNumber::getValue() const
{
    const_cast<MyNumber*>(this)->mLock.lock();
    int val = mValue;
    const_cast<MyNumber*>(this)->mLock.unlock();
    return val;
}

However, this is pretty ugly. A better solution is to simply declare the mLock member variable as mutable:

class MyNumber
{
    ...
private:
    mutable MyLock mLock;
    int mValue;
};

The mutable keyword lets the compiler know the state of the member variable can be altered even from within a const method. Obviously, the mutable modifier could be misused – why not declare all methods as const and all member variables as mutable? The intent is to use mutable only when changes to the internal state of an object cannot be observed from outside the object. Thread-safe locks could be one example of such a situation. The mutable keyword might also be used to update internal statistics for the class – so you could keep track of how many times the getValue() method had been called. You might also use the mutable keyword to cache the result of a computationally expensive operation:

class MyNumber
{
public:
    // mSqrtValue starts at -1 so the first call to
    // getSquareRootValue() knows the cache is empty.
    MyNumber()
        : mValue(0), mSqrtValue(-1), mSqrtCount(0)
    {
    }

    double getSquareRootValue() const
    {
        // Track some statistics.
        mSqrtCount++;

        // Cache the square root if we don't already have it.
        if (mSqrtValue < 0)
        {
            mSqrtValue = sqrt(mValue);   // sqrt() from <cmath>
        }

        return mSqrtValue;
    }

    void setValue(double inValue)
    {
        mValue = inValue;
        mSqrtValue = -1;
    }

private:
    double mValue;
    mutable double mSqrtValue;
    mutable int mSqrtCount;
};
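
As a brief usage sketch (assuming the class above), the mutable members are what allow the cache and the statistics to be updated even when callers hold only a const reference:

void printSquareRoot(const MyNumber& inNumber)
{
    // getSquareRootValue() is const, so it can be called through a
    // const reference even though it updates mSqrtValue and
    // mSqrtCount internally.
    cout << inNumber.getSquareRootValue();
}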

The use of mutable allows us to maintain what can be referred to as “conceptual const-ness” (or “logical const-ness”) instead of “bitwise const-ness”. That is to say, the bits within an object may change, but the conceptual (or observable) state of the object remains constant.

When virtual Functions Won’t Fall inline

C++ offers two useful modifiers for class methods: virtual and inline. A virtual method can be overridden by derived classes to provide new behavior, while the inline modifier requests that the compiler place the body of the method “inline” wherever the method is invoked, rather than emitting a single instance of the code that is called from multiple places. You can think of it as having the compiler expand the content of your method wherever you invoke it. But how does the compiler handle these modifiers when they are used together?

First, a sample class.

class Number
{
protected:
    int mNumber;

public:
    Number(int inNumber)
        : mNumber(inNumber)
    {
    }

    inline int getNumberInline() const
    {
        return mNumber;
    }

    virtual inline int getNumber() const
    {
        return mNumber;
    }
};

We can create an evaluation function to exercise our class:

void evalNumber(const Number* num)
{
    int val = num->getNumber();
    int inl = num->getNumberInline();
    printf("val = %d, inl = %d", val, inl);
}

And we can call our evaluation function, providing an instance of the Number class:

    Number* num = new Number(5);
    evalNumber(num);
    delete num;

The disassembly of evalNumber looks like this (using Microsoft Visual C++ 2008):

void evalNumber(const Number* num)
{
00CE2010  push        ebp  
00CE2011  mov         ebp,esp 
00CE2013  sub         esp,8 
    int val = num->getNumber();
00CE2016  mov         eax,dword ptr [num] 
00CE2019  mov         edx,dword ptr [eax] 
00CE201B  mov         ecx,dword ptr [num] 
00CE201E  mov         eax,dword ptr [edx] 
00CE2020  call        eax  
00CE2022  mov         dword ptr [val],eax 
    int inl = num->getNumberInline();
00CE2025  mov         ecx,dword ptr [num] 
00CE2028  mov         edx,dword ptr [ecx+4] 
00CE202B  mov         dword ptr [inl],edx 
    printf("val = %d, inl = %d", val, inl);
00CE202E  mov         eax,dword ptr [inl] 
00CE2031  push        eax  
00CE2032  mov         ecx,dword ptr [val] 
00CE2035  push        ecx  
00CE2036  push        offset __load_config_used+48h (0CE49D0h) 
00CE203B  call        dword ptr [__imp__printf (0CE724Ch)] 
00CE2041  add         esp,0Ch 
}
00CE2044  mov         esp,ebp 
00CE2046  pop         ebp  
00CE2047  ret              

You’ll notice that when invoking the virtual inline method the compiler inserts a call to the method’s implementation, while the non-virtual inline method is expanded in place. So why didn’t our virtual inline method get expanded in place as well?

The reason lies in how virtual methods work. When a C++ compiler encounters a virtual method, it typically creates a virtual method table (or v-table) for the class, containing pointers to each virtual method of the class. When an instance of the class is created, it contains a pointer to the v-table. Invoking a virtual method requires a look-up in the v-table to retrieve the address of the correct implementation of the method. Instances of derived classes simply point to a different v-table to override behavior in the base class. Once we understand how v-tables work, it should be apparent why the compiler couldn’t expand the virtual inline method in place: it’s possible num pointed to an object derived from Number whose implementation of getNumber() had been overridden. In that case, the compiler has to go through the v-table to ensure it invokes the correct method implementation.
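
To make that concrete, here is a hypothetical derived class. The name NumberDoubler is borrowed from the vftable symbol that shows up in the second disassembly below; its actual body isn’t shown in the post, so the implementation here is only a guess:

class NumberDoubler : public Number
{
public:
    NumberDoubler(int inNumber)
        : Number(inNumber)
    {
    }

    // Overrides the base class method. A call made through a Number
    // pointer or reference must go through the v-table to reach this
    // version instead of Number::getNumber().
    virtual int getNumber() const
    {
        return mNumber * 2;
    }
};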

So does virtual inline buy us anything? As it turns out, the compiler can take advantage of the inline declaration when it can determine with certainty the type of the object being referenced.

    Number num2(5);
    int dblVal = num2.getNumber();
    printf("dblVal = %d", dblVal);

When we reference num2 as a local variable, the compiler can determine from the context that we are referencing an instance of the Number class and not another class derived from Number. This allows the compiler to generate the following code:

    Number num2(5);
009220B5  mov         dword ptr [num2],offset NumberDoubler::`vftable' (924798h) 
009220BC  mov         dword ptr [ebp-8],5 
    int dblVal = num2.getNumber();
009220C3  mov         edx,dword ptr [ebp-8] 
009220C6  mov         dword ptr [dblVal],edx 
    printf("dblVal = %d", dblVal);
009220C9  mov         eax,dword ptr [dblVal] 
009220CC  push        eax  
009220CD  push        offset __load_config_used+5Ch (9249E4h) 
009220D2  call        dword ptr [__imp__printf (92724Ch)] 
009220D8  add         esp,8

You can see the code for getNumber() has been expanded in-place. It’s important to realize the compiler can only make this optimization because it knows the object’s type with certainty and, therefore, doesn’t need to go through the v-table to call the method. Instead the inline method can be expanded in-place.

More Than Resource Contention

One of the first things we learn about multithreaded programming is the need to guard against simultaneous read-write access to shared resources, but this isn’t always a simple matter.

Let’s consider a scenario where we have an existing C++ class that was written without concern for thread-safety:

// Requires <vector> for std::vector and <algorithm> for std::find.
class Book;

class Library
{
private:
    std::vector<Book*> mBooks;

public:
    void addBook(Book* inBook)
    {
        mBooks.insert(mBooks.begin(), inBook);
    }

    Book* getBook(int inIndex)
    {
        return mBooks[inIndex];
    }

    int getBookCount()
    {
        return mBooks.size();
    }

    void removeBook(Book* inBook)
    {
        std::vector<Book*>::iterator iter = std::find(mBooks.begin(), mBooks.end(), inBook);
        if (iter != mBooks.end())
        {
            mBooks.erase(iter);
        }
    }
};

Running through all the books in the library might look like this:

void processBooks(Library* inLibrary)
{
    int count = inLibrary->getBookCount();
    for (int index = 0; index < count; ++index)
    {
        Book* book = inLibrary->getBook(index);
        // do something with the book
    }
}

Now, suppose we want to make the Library class thread-safe. Since the std::vector template isn’t thread-safe, we’ll need to implement a locking mechanism to serialize access to mBooks. Otherwise, two threads might try to manipulate the vector at the same time with unexpected results. A quick implementation might be:

class Library
{
private:
    std::vector<Book*> mBooks;
    Mutex mBooksLock;   // assumed: a simple lock()/unlock() wrapper around a platform mutex

public:
    void addBook(Book* inBook)
    {
        mBooksLock.lock();
        mBooks.insert(mBooks.begin(), inBook);
        mBooksLock.unlock();
    }

    Book* getBook(int inIndex)
    {
        mBooksLock.lock();
        Book* book = mBooks[inIndex];
        mBooksLock.unlock();
        return book;
    }

    int getBookCount()
    {
        mBooksLock.lock();
        int count = mBooks.size();
        mBooksLock.unlock();
        return count;
    }

    void removeBook(Book* inBook)
    {
        mBooksLock.lock();
        std::vector<Book*>::iterator iter = std::find(mBooks.begin(), mBooks.end(), inBook);
        if (iter != mBooks.end())
        {
            mBooks.erase(iter);
        }
        mBooksLock.unlock();
    }
};

Now it’s thread-safe, right?

Well, no.

This implementation will prevent concurrent access to mBooks, but consider this real-life parallel. While driving down the highway with a camera, we point the camera out the window and take a picture of the traffic to our right. A few minutes later, we need to switch into the right-hand lane, so we check the picture to make sure no cars are in the lane. This kind of driving suffers from the same problem as our class implementation – relying on state information that’s possibly out of date – and it’s likely our car and our software will face the same outcome.

Just like cars moving into or out of adjacent lanes, if a second thread decides to add or remove a book from the library while the first thread is looping through the books, we have a potential change in two pieces of state information used in our loop. Can you spot both of them?

First, the book count may change. It may increase or decrease as we run through the processing loop. We might modify the code a bit to better handle changes to the book count:

Book* Library::getBook(int inIndex)
{
    Book* book = NULL;
    mBooksLock.lock();
    // Make sure we have a valid index.
    if ((inIndex >= 0) && (inIndex < (int)mBooks.size()))
    {
        book = mBooks[inIndex];
    }
    mBooksLock.unlock();
    return book;
}

void processBooks(Library* inLibrary)
{
    int index = 0;
    int count = inLibrary->getBookCount();
    do
    {
        Book* book = inLibrary->getBook(index);
        if (book != NULL)
        {
            // do something with the book
        }
        index++;
        count = inLibrary->getBookCount();
    } while (index < count);
}

However, we still have a second bit of changing state information: the current index value. Let’s assume our mBooks vector contains the books “A”, “B”, “C”, “D”, and “E”. If we process book “C” when the index is 2, then add a new book at the front of mBooks, when we process index 3 we’ll be processing book “C” again. Or if we remove the book at the front of the vector, we’ll skip processing on book “D” and go straight to book “E”.

You may think these problems should be obvious, but these kinds of mistakes can be all too common. Sometimes a developer manages to get away with a “safe enough” solution, but at some point the bugs will surface – most likely when another developer begins using the Library class in a slightly different manner or from yet another thread.

The primary purpose of this post is to point out that thread-safety is sometimes not as straightforward as we might think. Just scattering a few locks around class member variables isn’t sufficient. We need to be mindful of 1) how various threads may access shared resources, and 2) the lifetime of any state information we may be using (e.g. the book count and index value in our sample). In a later post, we’ll consider how we might implement the Library class to avoid threading problems.
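
As a rough preview (a sketch only, and not necessarily the approach the later post will take), one possible direction is to have the Library hand callers a snapshot of the book list taken while holding the lock, so the processing loop no longer depends on state another thread can change mid-iteration. getBooksSnapshot() is a new method introduced here for illustration, and it still leaves open the question of when a Book itself can safely be deleted:

std::vector<Book*> Library::getBooksSnapshot()
{
    // Copy the vector while holding the lock; the caller then works
    // with its own private copy of the list.
    mBooksLock.lock();
    std::vector<Book*> snapshot(mBooks);
    mBooksLock.unlock();
    return snapshot;
}

void processBooks(Library* inLibrary)
{
    std::vector<Book*> books = inLibrary->getBooksSnapshot();
    for (size_t index = 0; index < books.size(); ++index)
    {
        // do something with books[index]
    }
}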

Running Ice Cream Sandwich on the Nexus S

After ignoring the update notification for 7 to 10 days, I finally took the plunge and updated my Nexus S to ICS 4.0.3. After all, that’s part of why I wanted a pure Google phone – early access to OS updates.

I was a bit apprehensive after seeing that a lot of people had run into problems with ICS and it had been pulled by Google. But I figured there must be some way to go back if I ran into serious problems. Here’s a list of ten things I’ve experienced with ICS that stand out to me – some good, some not so good.

1. Constant Google+ crashes related to a Picasa Sync database.

I was regularly getting popups telling me Google+ had shut down or was no longer responding. According to the crash data, the problem had something to do with a failure to upgrade a Picasa-related database from version 5 to version 4. That’s right – it seemed to be trying to downgrade the database schema.

After several days of trying to figure out the problem, I found a post online that explained it was a simple matter of updating Google+ from the Android Market. Sure enough, I browsed into the market, found an update to Google+, and my problems disappeared. For some reason, the update only appeared when I went to the listing for the app. The Market app wouldn’t recognize the need for an update simply by visiting the list of my installed apps even though other updates had shown up.

2. Left swipe to access the camera from the lock screen.

I think this was a great addition to the phone. I’m still getting used to doing the left swipe and find myself doing a right swipe and getting ready to touch the camera app I used to have pinned on my home screen. I end up needing to hit the power button to suspend the phone, hit again to wake it up, and then left swipe to get to the camera. I figure in time I’ll get it down.

3. Settings are accessible from the notification tray.

This was actually one of the items that got me to try the update to ICS. Having quick access to settings from anywhere is very helpful. Just swipe down and tap the icon.

4. Auto-rotate seems to have some problems.

I noticed a couple of times that rotating the phone wasn’t automatically rotating the current app even though I knew the app should support it. Another Google search shows this is a fairly common problem, but I haven’t found a solution for it yet.

[Update: Apparently, this problem can often be corrected by a simple reboot, but I’m still unsure about what causes the problem.]

5. No more silencing the phone from the lock screen.

It used to be a simple matter of swiping across the lock screen to toggle in and out of silent mode, but now that ability has gone away. You can access the notification tray from the lock screen and quickly access notifications without needing to unlock the phone, which can be an added convenience.

[UPDATE: I’ve discovered this functionality is still available by holding down the power button. The popup menu contains buttons for silent, vibrate, and normal modes.]

6. Easy screenshots!!

Holding the power button and the down volume button for a second will generate a screenshot. This is a terrific new feature that I’m glad to see.

7. Look at all those new contacts.

I was a bit surprised when I first opened up my contact list and saw some unfamiliar names. All of my Google+ and Twitter contacts had been pulled into my contact list alongside my GMail contacts. I suppose that might make sense for some people, but most of those contacts are not people I actually know. Fortunately, it’s pretty easy to restrict the list back to just my GMail contacts.

8. What happened to my wallpaper?

I’m not sure how it happened, but the default Gingerbread live wallpaper was replaced by a close up of green grass at some point during my use of ICS. I never attempted to change this and I’m still not sure how it happened, but one minute I had colorful stripes shooting across my background, I experienced some kind of momentary lock up of the screen, and I had new wallpaper. I actually like the green grass so I’ve kept it around, but it’s still a big mystery as to what happened.

9. Creating home screen folders is easy.

I never did anything with folders on Gingerbread, so I’m not sure if anything like this was possible, but it’s very nice being able to quickly create folders to organize apps on my home screen. Now Angry Birds only occupies one space instead of three.

10. Improved scrolling through apps.

I always liked the way your list of apps scrolled off into the distance at the top and bottom of the screen on Gingerbread. I knew that had gone away in ICS and I thought I’d miss it, but the new page scrolling is far better. It’s much quicker to scroll through apps to find the one you want going one page at a time rather than flicking the list and hoping you stop it at the right location.

Well, that’s my “top ten list” for ICS experiences. Perhaps I’ll update it as I continue to use the update. If you’re still wondering whether to upgrade or not, I suggest doing some searching online to see what others are experiencing. Then once you know what you might run into, go ahead and take the plunge.

Streaming Video – Even for Grandma

I recently spent some time at the local Micro Center shopping for some network equipment. It’s pretty interesting what you can overhear on shopping excursions like this…

Customer: Hi, I’m looking for a wireless router.

Associate: Well, we have this D-Link Wireless-N router for $29.99, but it only transmits at 150 Mbps. For $39.99 you can get the one that transmits at 300 Mbps.

Customer: Do I need the extra speed?

Associate: Well, the 150 Mbps is fine if you’re a grandma just sending e-mail or something, but if you want to do any kind of serious networking like video streaming or gaming, 150 Mbps isn’t going to cut it and you’re going to want the 300 Mbps.

Well, your grandma’s e-mail needs will probably be just fine with less than 1 Mbps transfer speeds, but this made me wonder… Are these associates really ignorant of the technology they’re selling and they simply repeat what they’ve been told, or do they actually know they’re misleading customers in order to sell higher priced products?

The higher throughput router should alleviate network congestion on a very busy home network, but it’s not going to make any real difference when you try to stream your next YouTube or Netflix video.  If you’re not sure this is the case, run a quick internet bandwidth benchmark from a computer you plan to connect to that wireless router. You will quickly find that your ISP doesn’t come anywhere close to 150 Mbps, much less 300 Mbps, when transferring data from the internet to your house. More likely, you will see transfer speeds in the 12-16 Mbps range. Your ISP is the bottleneck in your connection to the internet and regardless of what wireless networking equipment you’re using, you can’t do anything to your home network that will make your ISP provide you data at a faster rate. (Though you can sometimes pay them more money for a faster connection.) Unless you have an unusually large amount of internal traffic on your home network, the 150 Mbps and 300 Mbps routers should provide the same experience with internet video and gaming.

So why would anyone need the 300 Mbps router? There is some advantage to using the faster router, but it only provides an advantage for local network traffic, not traffic coming from the internet. For example, if you are frequently transferring a lot of data from a laptop to another computer on your network (such as backing up photos or video), you may want to go after the better networking performance of the 300 Mbps router. However, you still need to be aware that achieving the best networking speeds will depend on factors beyond the specs on the 300 Mbps router. For example, does the wireless card on your laptop support the higher data rates? Even if it does, how does your environment affect the signal-to-noise ratio between your laptop and the wireless router? The clarity of the wireless signal is going to affect your throughput. There are a lot of factors beyond the numbers on the router packaging that you will want to consider.

Most of the wireless devices I connect to my network are still Wireless-G, not Wireless-N. The Wireless-G standard achieves a maximum possible throughput of 54 Mbps. That’s not anywhere near 150 Mbps – but it’s still plenty fast compared to the bottleneck of my ISP and Netflix comes through just fine.

So the next time you wander into your local tech store and ask about a product, just be aware that the person answering your questions may not know a whole lot more than you about what they’re selling.

Why Is Facebook Calling?

AllThingsD recently reported on a forthcoming Facebook phone that is in the works with HTC. A follow-up article offered some thoughts as to why you might want one, but I’d like to offer up some thoughts as to why Facebook might want to get this phone into users’ hands.

One simple and apparent explanation is that as more people go online using their smartphones, Facebook needs to vie for their users’ attention on mobile platforms. They are currently accomplishing this through apps on the iPhone, Android, and Windows Phone platforms. Running a Facebook app on those platforms essentially provides an app interface to the Facebook web site, but there would be a lot more benefit to Facebook if it were integrated more deeply into the phone.

It is to Facebook’s advantage to better understand the connections you have with the different people in your social graph. The better Facebook understands these relationships, the more valuable its social graph becomes for marketing and social commerce. Currently, Facebook can only monitor the dynamics of your social interactions online. They know what you “like” and they know what your friends “like,” and this information helps them send targeted advertisements your way. But this information only goes so far. Whom do you call the most? Which of your Facebook friends has the most influence over your buying decisions? What retailers do you like to frequent?

Enter the Facebook phone… Now, Facebook can begin to peer into your offline life as well, seeing not just who posts on your wall, but how often you have personal, live contact with those people. That friend you call every few days probably has more influence over you than the people who just post on your wall. And what about those friends who aren’t active on Facebook? Now Facebook can see your interactions with those Facebook “sleepers”. With a built-in GPS, Facebook can possibly gain insight into the stores and restaurants you regularly visit, and if your friends are using a Facebook phone, it becomes possible to see the friends with whom you spend time, eat, or shop.

This is all speculation to some extent, but we should consider Facebook’s reasons for venturing into the realm of Facebook-branded phones – they want to collect more and better data about Facebook users. Could Facebook actually get away with gathering even more data about our personal relationships and interests? Whenever a Facebook phone actually arrives on the scene, it will be interesting to see what kind of privacy controls are offered to users and how Facebook begins to use the new data it will be able to collect.

What are your thoughts on why Facebook might want to enter the mobile phone market? Leave your comments.

How can I quickly create UI mockups?

If you’re trying to figure out the best UI layout for a mobile phone app, you might want to get a small 3″ x 5″ spiral-bound notepad and use it as a quick testing platform.

I’ve been in the habit of carrying around a small spiral-bound notebook (about 5″ x 7″) for jotting down ideas. I used it a little bit for sketching out screen designs for mobile apps, but I had to consciously think about the size of a mobile screen compared to the size of the page on which I was drawing. However, I’ve recently found that a slightly smaller notepad is just about the perfect size for these UI mockups.

The size of a 3″ x 5″ notepad is pretty close to the size of a mobile phone, which allows you to get a quick feel for whether or not screen elements are sized appropriately. You can also hold the notepad in one hand to check the feasibility of single-handed use. The notepad is also small enough to carry in a pocket to quickly experiment with UI layouts while waiting in line at the store or other places around town.

If you’ve found a cool way to experiment with UI layouts, go ahead and share it in the comments.

How can I get Pretty Permalinks on a Windows web server?

If you happen to be hosting your WordPress blog on a Windows server, you may have run into the issue of trying to remove the index.php section from your URLs. I was just about to attempt writing an HttpHandler to perform the rewrites when I stumbled upon the following article.

http://learn.iis.net/page.aspx/466/enabling-pretty-permalinks-in-wordpress/

It just so happens that if your host is running IIS 7.0, removing index.php from those URLs is relatively simple using the built-in URL Rewrite functionality of IIS. The linked article provides a very nice summary of the steps you’ll need to take, but it basically amounts to adding the following <rewrite> section to the <system.webServer> element of your site’s web.config file.

<rewrite>
  <rules>
    <rule name="Main Rule" stopProcessing="true">
      <match url=".*" />
      <conditions logicalGrouping="MatchAll">
        <add input="{REQUEST_FILENAME}" matchType="IsFile" negate="true" />
        <add input="{REQUEST_FILENAME}" matchType="IsDirectory" negate="true" />
      </conditions>
      <action type="Rewrite" url="index.php" />
    </rule>
  </rules>
</rewrite>

Since it took some time for me to locate the article in a Google search, I thought I’d post a link here to help document the solution.