Stopping ASP.net concurrent logins

I’ve been working on a subscription site whose logins are powered by the ASP.net membership provider system.

Now I want to stop people sharing their logins to avoid paying for a subscription. We don’t really want multiple simultaneous logins from the same username.

After a lot of Googling I came up with an almost suitable solution:

http://www.eggheadcafe.com/articles/20030418.asp

The only problem for me is that this solution blocks the new login. This means that if a user logs in, closes the browser, and then logs in again, they are denied a login for the next twenty minutes (or however long the session timeout is set to).

So I switched things around, and actually ended up with a slightly simpler solution.

In my asp:Login control’s LoggedIn event I have the following:

protected void LoginMain_LoggedIn(object sender, EventArgs e)
{
    System.Web.UI.WebControls.Login senderLogin = sender as System.Web.UI.WebControls.Login;

    // The cache key is the username and password concatenated together.
    string key = senderLogin.UserName + senderLogin.Password;

    // Sliding expiration matching the session timeout (Session.Timeout is in minutes).
    TimeSpan TimeOut = new TimeSpan(0, 0, HttpContext.Current.Session.Timeout, 0, 0);

    // Store this user's SessionID against the key. A later login with the same
    // credentials simply overwrites it. DateTime.MaxValue means no absolute
    // expiration, so only the sliding expiration applies.
    HttpContext.Current.Cache.Insert(key,
        Session.SessionID,
        null,
        DateTime.MaxValue,
        TimeOut,
        System.Web.Caching.CacheItemPriority.NotRemovable,
        null);

    // Remember the key so we can check the cache on every subsequent request.
    Session["username"] = key;
}

We concatenate the username and password as before and use this as the key for a cache item. The cache item’s value, though, is set to the just-logged-in user’s SessionID. We then store the key in a session variable.

In global.asax we have:

protected void Application_PreRequestHandlerExecute(Object sender, EventArgs e)
{
    // Session state isn't available for every request (static files, for example).
    if (HttpContext.Current.Session != null)
    {
        if (Session["username"] != null)
        {
            // Look up the SessionID that was cached at login for this user's key.
            string cacheKey = Session["username"].ToString();
            if ((string)HttpContext.Current.Cache[cacheKey] != Session.SessionID)
            {
                // Someone has since logged in with the same credentials, so sign
                // this session out and explain what happened.
                FormsAuthentication.SignOut();
                Session.Abandon();
                Response.Redirect("/errors/DuplicateLogin.aspx");
            }
        }
    }
}

This fires for every request. If the username session variable exists then the SessionID is taken from the cache and compared to the current SessionID. If they don’t match, the user is signed out, their session is abandoned, and they are redirected to an error page.

This means that a second user logging in will change the cached SessionID and log in fine, whilst the original user will be logged out.

The error message explains what has happened and what to do about it: which is, of course, either stop cheating me out of cash, or let us know if you think your account has been compromised.

Accessing ASP.net masterpage properties from content page

If you use ASP.net masterpages then sooner or later you will have discovered that you cannot access your masterpage’s custom properties through Page.Master (or simply Master), because that property only exposes the base MasterPage type.

Previously I have used the solution you see floating around a lot, which is to put the following in your content page’s aspx (assuming our masterpage is called Main.master):

<%@ MasterType VirtualPath="~/Main.master" %>

And the following in your code behind for example:

Master.BodyClass = "pgKeyboard";

It works fine, although for every new page you make you need to remember to add the MasterType declaration, which has always frustrated me.

So the other day I realised, obvious though it was once I had, that I could drop the MasterType declaration and simply cast the Master object to the correct type.

For example:

((Main)Master).BodyClass = "pgKeyboard";
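
For completeness, here’s a minimal sketch of what the Main masterpage’s code-behind might look like to make that cast work. The BodyClass property and the PageBody control ID are just assumptions for illustration, wiring the property up to a <body runat="server"> element in Main.master.

public partial class Main : System.Web.UI.MasterPage
{
    // Assumes Main.master contains <body id="PageBody" runat="server">,
    // which ASP.net exposes as an HtmlGenericControl.
    public string BodyClass
    {
        get { return PageBody.Attributes["class"]; }
        set { PageBody.Attributes["class"] = value; }
    }
}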

So if you can’t access your masterpage properties or methods, just remember you need to cast the Master object into your masterpage type.

PayPal IPN, PDT & Analytics tracking, getting there.

As I’ve previously written about, I’ve been having some trouble with PayPal PDT (Payment Data Transfer) and Google Analytics e-commerce tracking.

If you’ve added Analytics to your PayPal thank-you page, used PDT to get the data sent to Analytics, and got that all working nicely, then you will have discovered that not every sale gets tracked, because not every shopper can be guaranteed to land on your thank-you page.

Now I’ve been doing some testing on an idea to get around this using IPN. The basic premise is this.

  1. Google Analytics works through a request for __utm.gif from the Analytics server, to which is attached all the tracking information.
  2. On the page just before leaving for PayPal, set the Analytics script to local only.
  3. Use URL rewriting to hide a script behind your local __utm.gif.
  4. Record all the details of the request for the local __utm.gif in a database, referenced to the session (there’s a rough sketch of this just after the list).
  5. Send the session ID information through PayPal’s custom variable.
  6. User finishes the sale.
  7. IPN script picks up the sale, checks the session ID and looks up the stored request.
  8. IPN script rebuilds the request and forwards it to Google’s __utm.gif.
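
Here’s a rough sketch of the sort of script that could sit behind the local __utm.gif (steps 3 and 4). It’s only an outline under a few assumptions: TrackingStore is a hypothetical stand-in for whatever table you log the hit to, and the URL rewriting that maps /__utm.gif onto this handler is assumed to be in place.

public class UtmGifHandler : System.Web.IHttpHandler, System.Web.SessionState.IRequiresSessionState
{
    public void ProcessRequest(System.Web.HttpContext context)
    {
        // Record the full tracking querystring against the current session so the
        // IPN script can rebuild and replay it once the sale completes.
        // TrackingStore is a hypothetical helper, not part of the framework.
        TrackingStore.Save(context.Session.SessionID, context.Request.Url.Query);

        // Optionally pass the hit straight on to Google so the checkout pageview
        // itself still gets tracked as it happens.
        System.Net.HttpWebRequest forward = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(
            "http://www.google-analytics.com/__utm.gif" + context.Request.Url.Query);

        using (System.Net.WebResponse response = forward.GetResponse())
        using (System.IO.Stream gif = response.GetResponseStream())
        {
            // Return the gif Google sends back so the browser gets a valid image.
            context.Response.ContentType = "image/gif";
            byte[] buffer = new byte[4096];
            int read;
            while ((read = gif.Read(buffer, 0, buffer.Length)) > 0)
                context.Response.OutputStream.Write(buffer, 0, read);
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}

The handler would just need registering in web.config for whatever path the URL rewrite points at.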

Now, to test this before getting involved in databases and IPN, I just made two pages: one with Analytics set to remote, the next with Analytics set to local. Then I made the local __utm.gif simply forward the request on to the remote __utm.gif through a server-side HTTP request.

Wait a day.

No joy. Nothing showed up for the e-commerce tracking, or for the visit to the second page, just the first-page visits where the script was set to remote.

Then I fiddled for weeks trying to improve the request, adding all the cookies and so on.

No joy.

So I took a look at the urchin.js file to determine whether or not the local request was being built any differently from the remote one. And it was. If you have a look through urchin.js searching for the local/remote variable (I forget the name of it right now) then you’ll see that there are extra things appended to the end of the querystring for the remote call to __utm.gif. The extra stuff is the content of all the Analytics cookies.

So I fiddled with urchin.js and made a local copy that built the same request for both remote and local, including all the cookie data. I had a couple of people hit the two pages, and a day later…

Success: there is e-commerce transaction data, items, totals, everything in Analytics now.

I used different sources and the like for a couple of tests and those have also come through. Even more surprising, though, is that where I have had other people test the two pages for me, their locations have been tracked and assigned to the e-commerce transactions correctly, which I didn’t expect, as it’s always the web server making that request, and it’s not moving around!

So that’s something to work on. I’ll keep you updated on the next stage of implementation.

PayPal E-commerce Tracking with Google Analytics

Updated post: http://www.teknohippy.net/2008/03/04/paypal-ipn-pdt-analytics-tracking-getting-there/

Trying to track E-commerce transactions with Google Analytics (GA) and PayPal can be problematic.

PayPal has two ways of returning data to your server. Payment Data Transfer (PDT) and Instant Payment Notification (IPN).

PDT can be used to set up GA tracking by including the relevant GA scripts on a receipt page, processing the PDT data server-side and populating those scripts. Instruct PayPal that this page is your “return” page, and when visitors return to it the e-commerce and goal data will be tracked.
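
As a rough sketch of the server-side half of that, fetching the PDT data on the return page looks something like the following; the identity token placeholder and the response handling are assumptions here, and populating the GA scripts from the result is left out.

// "tx" arrives on the querystring when PayPal redirects the buyer back.
string tx = Request.QueryString["tx"];

// Post it back to PayPal along with your PDT identity token (placeholder below).
string postData = "cmd=_notify-synch&tx=" + tx + "&at=YOUR_IDENTITY_TOKEN";

System.Net.HttpWebRequest pdtRequest = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(
    "https://www.paypal.com/cgi-bin/webscr");
pdtRequest.Method = "POST";
pdtRequest.ContentType = "application/x-www-form-urlencoded";
using (System.IO.StreamWriter writer = new System.IO.StreamWriter(pdtRequest.GetRequestStream()))
{
    writer.Write(postData);
}

// The response is "SUCCESS" or "FAIL" on the first line, followed by name=value
// pairs (amounts, item names and so on) to feed into the GA e-commerce script.
string pdtResponse;
using (System.IO.StreamReader reader = new System.IO.StreamReader(
    pdtRequest.GetResponse().GetResponseStream()))
{
    pdtResponse = reader.ReadToEnd();
}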

The big problem with the PDT approach is that PayPal users do not have to return to the receipt page, even if you’ve turned on the auto-return feature. The pause before redirecting is very long and many people just abandon the process on the PayPal pages.

IPN on the other hand will always get triggered. The problem is that it calls a script on your server from PayPal’s server-side scripts. This means your IPN script never runs in the user’s browser, so the GA JavaScript is never executed and no tracking happens.

Now at its heart all the GA script really does is assemble a request for an image on the GA servers, __utm.gif to be more specific. Appended to this request is a lengthy querystring with all the information for Analytics to log.

It should therefore be possible to capture the user’s info, or the actual __utm.gif querystring, on the page just before leaving for PayPal, temporarily store this in a database against the user’s SessionID, and pass the SessionID to PayPal as a custom pass-through variable.

The easiest way to collect the querystring from the pre-PayPal page is to set “_userv = 2;” in the tracking code. This means that __utm.gif will be requested from both the Analytics server and your server. Trap the call to your local __utm.gif with some server-side script and you’ve got the whole querystring very easily, as well as the corresponding SessionID.

The user is now off to PayPal to complete their transaction.

Then in your IPN script, look up the SessionID and recover the __utm.gif querystring, change the utmp path variable to the path of the receipt page, and then use your server-side script to make an HTTP request to __utm.gif.
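
A rough sketch of that replay step is below; the TrackingStore lookup and the receipt page path are placeholders of mine, and the IPN validation handshake itself is left out.

// "custom" carries the SessionID we passed through to PayPal at checkout.
string sessionId = Request.Form["custom"];

// Hypothetical lookup of the querystring recorded behind the local __utm.gif.
string query = TrackingStore.GetQueryString(sessionId).TrimStart('?');

// Point the tracked pageview at the receipt page rather than the checkout page.
string receiptPath = System.Web.HttpUtility.UrlEncode("/paypal/thankyou.aspx");
query = System.Text.RegularExpressions.Regex.Replace(query, "utmp=[^&]*", "utmp=" + receiptPath);

// Replay the hit against Google's real __utm.gif; the request itself is what
// registers the pageview, so the response can be discarded.
System.Net.HttpWebRequest replay = (System.Net.HttpWebRequest)System.Net.WebRequest.Create(
    "http://www.google-analytics.com/__utm.gif?" + query);
using (System.Net.WebResponse response = replay.GetResponse())
{
}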

This will then at least get the visit tracked, and goal reporting will work. The IP address will be wrong but the rest of the data will be correct.

More complex is building the querystring for the __utmSetTrans() call. I’ve had a look at what’s sent: a separate call is made to __utm.gif for the transaction and for each item within it.

I suppose it would be possible to set _userv to 0 and only have a local __utm.gif request on the checkout page. Then the e-commerce tracking data could also be included. All the script hiding behind the local __utm.gif needs to do is record all requests when visitors hit the checkout page. It can pass on the actual request for the tracking (but not the transaction) when it happens. Then in the IPN script the tracking request path can be rewritten and replayed, followed by the transaction requests. It sounds like it should work.

I guess I should go make it happen and test it!

Natural Language Search

Powerset (http://www.powerset.com) looks interesting. At present the company is in “semi-stealth” mode, gathering investment as well as developing their natural language search engine.

Powerset’s Barney Pell has some interesting stuff to say on the topic.

http://www.barneypell.com/archives/2006/10/powerset_and_na.html

Well worth a read, some interesting concepts with respect to the way that search engines work nowadays.

Search engines are keyword based and at their heart are really just boolean searches against their index. They take your search term and strip out what are known as stopwords, leaving just the keywords. Stopwords are words such as a, about, from, of, for and the like; these would only complicate the results of a boolean search as they are such common words.

Pell and gang demonstrate that in some searches these words are actually useful. For example, take these three search terms:

  • Books for children
  • Books about children
  • Books by children

When for, about and by are all stripped out we are left with only “Books children”, and the search engine cannot distinguish between the three quite different purposes of the queries.

Pell says that we are all searching with an impoverished, pidgin English at present, and I for one would welcome a more natural approach at times. I’m sure, like me, many of you have sometimes come across a particular search that never seems to get the results that you’re after, or at best it takes a long time to get the right string of keywords and advanced search options. Imagine what searching is like for the less technically minded out there who don’t speak keywordese. NL searching, if it works and is marketed well to that larger group of people, could be very successful.

If though, when it launches, it doesn’t have a toolbar-esque plugin then I will find it very difficult to remember to use it. When I want something my mouse cursor always goes straight for my Google toolbar.

Windows Live Writer

I’ve just downloaded Windows Live Writer (WLW), Microsoft’s desktop/offline Blog composer/writer software.

Set up was very easy; I simply gave it the address and login details of my blog. Not only did WLW determine what my blog was running on, but it also downloaded and used my CSS.

Editing in WLW is therefore truly WYSIWYG: as I type I see everything styled by the CSS from my site. I’ll do a screenshot.

[Screenshot: click for bigger pic]

There you go, all I did was ALT-PrintScreen and paste. Looks like it’s inserted a thumbnail with a link. Ah yes it has, there are plenty of handy properties to choose where it links to and how big it is etc.

I’m quite enjoying this, let’s try and publish now.

– edit –

Well, that published fine, though I forgot categories as always. I can also do those here; it’s picked them up. I can’t add any new ones though. Still, I think I may find this a useful place to store the longer, heavier-thinking type of article.

Ahah! I have just discovered you can also open and edit from a list of existing posts. I was wondering how you did that. Overall this is quite a handy utility.

http://ideas.live.com

PHP Wiki Software and Skinning

Having recently acquired some unix hosting I’ve been experimenting with various PHP/MySQL based applications. WordPress for example that this blog runs under, as well as Joomla for CMS and phpBB for forums.

Having been a long time user of ASP applications some of my experiences have been quite refreshing, especially with respect to ease of installation on shared space in some cases.

Now just the other day I decided I needed a Wiki for a new project. I’ve been using the ASP.net driven FlexWiki for some time now and have been very happy with its ease of install and the small changes I usually want to make to the look and feel. When it came to choosing a unix based option I immediately plumped for MediaWiki; it seemed an obvious choice given that it runs the most famous Wiki out there, Wikipedia.

Oh how wrong could I have been though. When it came to skinning it to get a look that suited my purpose, it became a complete nightmare. The skinning system is a complete disaster area and it requires more work than I have time for to get my head around it. Yes it can be done, as witnessed by the Mono-project website or the Mozilla Dev site, but it’s a complete bitch. The mixture and muddle of markup and PHP code is simply unprofessional.

So I had a look around, and after a while searching I came up with a beautifully easy to skin piece of Wiki software. PMWiki has a great philosophy behind its code base and a simple skinning system that is a joy to behold after wading through the nightmare depths of MediaWiki.

I shall enjoy designing a nice template later tonight.

http://www.pmwiki.org

Recent Webserver Downtime

Apologies for the recent downtime on teknohippy.com. This is due to my ISP, Nildram, not being able to provide what is essentially their core business: ASP has been going tits up on and off since the weekend.

I’ve phoned many times today; it’s been fixed, then it’s gone just as wrong again with the same symptoms.

The symptoms are very similar to those we had locally recently, which turned out to be caused by a worm, not that they wanted to listen to me when I was telling them this.

Support used to be great at Nildram, and I would have recommended them to anyone a year or two ago. The thing is, over time, as they have become bigger and bigger and more successful, the interaction between client and supplier has become more and more formal and as a result less useful.

Soon we shall move somewhere else I think. Our main Windows 2000 server has been up and down all day, which means that the thousands of pounds being spent on Google AdWords every day for a variety of clients’ sites on this server are simply going to waste. Not good.

Well it looks fixed for now, let’s see how long it stays up.