Monday, December 28, 2009

Notes Error: Maximum allowable documents exceeded for a temporary full text index

While executing the FTSearch method on the NotesView class you might get an exception which says "Notes Error: Maximum allowable documents exceeded for a temporary full text index".
The exception occurs when full text indexing is disabled on the database and the number of documents in the database is huge.
When we execute the FTSearch method on such a database, a temporary full text index is created for it. The number of documents which can be indexed by this temporary index is capped at a default value, and if the number of documents in the database exceeds this default, we get this exception. We can override the cap in the server's notes.ini using the following parameter:


TEMP_INDEX_MAX_DOC=number


Once we set this number to a value greater than the number of documents in the database, the API should start working.
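Here's a rough sketch of the kind of call that triggers this, using the Domino COM interop (domobj.tlb) from C#. The server, database and view names are made up, and the API shapes are from the COM type library as I remember them, so treat this as illustrative:

// Illustrative sketch using the Domino COM interop (domobj.tlb).
using Domino;

class FtSearchDemo
{
    static void Main()
    {
        NotesSession session = new NotesSessionClass();
        session.Initialize("notesPassword");                 // Notes client password

        NotesDatabase db = session.GetDatabase("Server/Org", "crm.nsf", false);
        NotesView view = db.GetView("AllDocuments");

        // With no permanent full text index on the database, this builds
        // a temporary one -- which is where the error can show up.
        int hits = view.FTSearch("india", 0);                // 0 = no explicit limit
        System.Console.WriteLine(hits + " documents matched");
    }
}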


More details can be found here.


~Abhishek

Accessing Lotus Notes Database from .NET

I am working on a project where I need to get some data out of a Lotus Notes application, and I found a couple of ways to extract it.


1) The easiest way to extract the data is to use the Lotus Notes SDK, which gets installed on the system along with the Notes client.
You can set a reference to domobj.tlb and generate an interop assembly to call the APIs. The API documentation can be found here.
Some of the APIs, like search on the NotesDatabase, don't seem to work, and if that's the case we can use the second approach.


2) The best part of Lotus Notes is that the data is also exposed over ODBC. The ODBC driver can be downloaded and installed from here. Once done, we can use the classes from the System.Data.Odbc namespace to access the items and their properties.
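Here's a minimal sketch of the ODBC route, assuming a system DSN named "NotesCRM" created with the Notes ODBC driver; the view and column names are made up for illustration:

using System;
using System.Data.Odbc;

class NotesOdbcDemo
{
    static void Main()
    {
        using (OdbcConnection conn = new OdbcConnection("DSN=NotesCRM"))
        {
            conn.Open();
            // Notes views/forms show up roughly like tables to the driver.
            OdbcCommand cmd = new OdbcCommand("SELECT Subject FROM AllDocuments", conn);
            using (OdbcDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                    Console.WriteLine(reader.GetString(0));
            }
        }
    }
}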


Will post some tweaks when I start using these things....


~Abhishek

Wednesday, October 14, 2009

Resource Oriented Architecture - Part 5 (Connectedness)

One of the most important features of the web is its connectedness, i.e. almost all of the web is interconnected through hyperlinks.
Any resource which is addressable on the web can be connected from another resource using a hyperlink. In an application based on ROA all the resources should be connected to each other. Connectedness can be achieved in an application if we choose the right representation for a resource. XML is the default representation for any entity in an RPC based architecture, because it is a structured way to represent the entity and hence can easily be understood by computer programs.
As of today, web services return data as XML to make it machine readable, while web applications render data as HTML to make it human readable. However, what we forget is that there's another representation called XHTML, which is nothing but HTML that is also well formed XML.
So if we represent all the resources in XHTML, computers can parse them as XML while humans can read them in a browser as HTML. This also gives us a way to interconnect all the resources in an ROA based application using hyperlinks.
There's no rule which says that resources should always use XHTML, however it would be the preferable choice since it merges the web application and the web service and hence makes life easy.
We can still use plain XML to represent the resources; in such a case, however, links to the relevant resources should be embedded in the representation, so that the whole system is navigable once the user has a link to any one resource within the system.
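For illustration, a hypothetical XHTML representation of a CRM account could look like this; a program can parse it as XML, a browser renders it as HTML, and the hyperlinks connect it to related resources (the URIs reuse the crmsystem examples from the other parts of this series):

<div class="account">
  <h1>Account 1234</h1>
  <p>Status: <span class="status">Active</span></p>
  <ul>
    <li><a href="http://crmsystem/account/1234/Opportunity/1">Opportunity 1</a></li>
    <li><a href="http://crmsystem/accounts/hyderabad">Other accounts in Hyderabad</a></li>
  </ul>
</div>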

This concludes the 5 part series about resource oriented architecture. The next logical step for me would be to design an application using ROA tenets, which involves identifying the right resources and their representations. I might take an enterprise application to do this exercise.
Hope to find some time to do that....

~Abhishek

Thursday, October 8, 2009

Resource Oriented Architecture - Part 4 (Uniform Interface)

The RPC style services are very good at creating and documenting contracts. However, if I look at it from a high level, I am creating a new interface for every service in the world.

Imagine this: I can write so many services to manage a CRM account, e.g. createAccount, updateAccount, deleteAccount, addOpportunity, winOptions, loseOptions and so on and so forth, i.e. for every conceivable action I want to take on an account I can probably create a new method in the contract. This becomes very difficult to manage when orchestrating the services into a business process, because it is very difficult to understand which services can be called at what stage of the business process.

Now hand this to a programmer who's building an orchestration using these services. There's no uniformity in the interfaces which would allow the programmer to see a pattern or a way to know what might be possible in the system. One has to be a domain expert to work with such a system.

That's why ROA proposes a uniform interface which should be exposed by every resource inside a system, i.e. once you know the URL of a resource, it can support up to 6 methods which are nothing but the HTTP verbs. Let's have a look at them:

1) OPTIONS :- This is a metadata verb, i.e. it is supposed to tell the caller which HTTP methods the resource supports.
2) GET :- As the name implies, this returns the resource content to the caller. The format of the content is another story, and we'll discuss it in the next tenet.
3) HEAD :- This is supposed to return only the HTTP header information to the caller, e.g. when the content was last modified. This is an important verb, as based on this we can take advantage of the HTTP caching infrastructure.
4) PUT :- This verb is supposed to create or update a resource. Ideally, if the request is sent to a non-existent URI then we are supposed to create the resource, while if the resource exists on the server then it is supposed to be replaced by the new one.
5) DELETE :- As the name implies, it can be used to delete or archive the resource.
6) POST :- This is the most open ended verb in HTTP and hence the most abused one by SOA. Anyway, in ROA this verb can be used either to append content to an existing resource, or to create new content where the URI of the newly created resource is decided by the server.

All in all, we can conclude that a resource can support CRUD operations using the HTTP verbs (create via PUT or POST, read via GET, update via PUT or POST, delete via DELETE), while a resource can advertise the supported operations using the OPTIONS verb. HEAD is a verb which can be used to take advantage of the caching mechanism.

Now once we have this kind of system in place, we have to ensure that we define our system in such a way that each entity, state or stage of a business process can be represented as a resource. So in our example CRM system we can create 3 resources, i.e. Account, Opportunity and Option,
and each of these resources can support the GET, PUT, POST and DELETE operations. So createAccount becomes a PUT request to http://crmsystem/account/myAccount/1234, deleteAccount becomes a DELETE request to http://crmsystem/account/myAccount/1234,
addOpportunity becomes a PUT request to http://crmsystem/account/1234/Opportunity/1,
and winOptions/loseOptions can become a POST request to http://crmsystem/account/myAccount/1234/Options/1.

The beauty of the whole system lies in the fact that each entity has a URI and each URI supports a uniform interface. So when I give the URI of an opportunity (http://crmsystem/account/1234/Opportunity/1) to a programmer in my company, at the very least he knows the following (see the sketch after this list):
1) To get the details of the opportunity, he has to hit the URI with an HTTP GET request
2) To delete the opportunity, he has to hit the URI with an HTTP DELETE request
3) To update the opportunity, PUT or POST should help
4) To know what is supported, OPTIONS can be used.
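A quick C# sketch of a client exercising this uniform interface against the example URI (the host is of course made up):

using System;
using System.Net;

class UniformInterfaceDemo
{
    static void Main()
    {
        string uri = "http://crmsystem/account/1234/Opportunity/1";

        // GET: fetch the current representation of the opportunity.
        HttpWebRequest get = (HttpWebRequest)WebRequest.Create(uri);
        get.Method = "GET";
        using (HttpWebResponse resp = (HttpWebResponse)get.GetResponse())
            Console.WriteLine("GET returned " + (int)resp.StatusCode);

        // OPTIONS: ask the resource which verbs it supports.
        HttpWebRequest options = (HttpWebRequest)WebRequest.Create(uri);
        options.Method = "OPTIONS";
        using (HttpWebResponse resp = (HttpWebResponse)options.GetResponse())
            Console.WriteLine("Allow: " + resp.Headers["Allow"]);
    }
}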

In this world we can literally live with these verbs, and most if not all programming problems can be broken into resources. Business process orchestrations can streamline themselves around these verbs, and the world might become much simpler for us as programmers :).

~Abhishek

Resource Oriented Architecture - Part 3 (Addressability)

The URL is one of the simplest and most powerful features of the WWW. It is because of URLs that everything is discoverable and locatable on the web.

e.g. a builder puts his URL on an advertisement billboard, and everyone who hits that URL gets access to all the resources about a project which the builder has put on the web.

If we look at Google's search results, each page of the results is a resource and each page has a unique URL.
e.g. http://www.google.com/search?q=India&start=35 will take me to a page which has the search results for the query "India", with the page contents starting from the 35th result.

In a resource oriented world, each resource inside a system has a unique URL to locate it. e.g. in a CRM system every account would have a unique URL to navigate to it, which basically means that all the accounts are addressable. If we shoot an HTTP GET request at that URL, we should be able to get the details of that account.

So an account with ID=1234 can be represented as http://crmsystem/account/1234.
Even a collection of accounts is a resource, e.g. all the accounts from Hyderabad can be a resource found by sending an HTTP GET request to http://crmsystem/accounts/hyderabad.
Note the subtle difference between the URLs: the first URL contains the string account while the second one contains accounts, which signifies that the first resource is just one account while the second one is a collection of accounts.
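A tiny sketch of what addressability buys the client: plain HTTP GETs against those URLs (illustrative host and paths):

using System;
using System.Net;

class AddressabilityDemo
{
    static void Main()
    {
        using (WebClient client = new WebClient())
        {
            // One account...
            string account = client.DownloadString("http://crmsystem/account/1234");
            // ...and a collection of accounts, a resource in its own right.
            string accounts = client.DownloadString("http://crmsystem/accounts/hyderabad");
            Console.WriteLine(account.Length + " / " + accounts.Length + " bytes");
        }
    }
}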

This approach is very different from classic SOA applications, where we'd write 2 web service operations called getAccountById(string accountId) and getAccountsForRegion(string region). Both operations would have a single URL, say http://crmsystem/services, and would be invoked by sending an appropriate SOAP envelope to this URL using the HTTP POST method. It's not really intuitive, for a user or a machine, what kind of POST requests one is supposed to send to that URL.

Another important aspect of this tenet is that a URL is unambiguous and points to one and only one resource, while a resource can be located via 2 different URLs. e.g. an account with accountId=1234 which is the top performing account can be represented by 2 different URLs, i.e. http://crmsystem/account/1234 and http://crmsystem/account/topPerformer. In this case the URL http://crmsystem/account/1234 points to a static resource and ideally won't change, while http://crmsystem/account/topPerformer is a dynamically calculated resource and can point to different resources at different times.

This tenet closely relates to the next tenet I'll discuss in my next post in this series, i.e. the Uniform Interface.

~Abhishek

Monday, October 5, 2009

URI, URN and URL why 3 terms

I am attending a training, and an interesting question came up while discussing WCF: what is the difference between the terms URI, URN and URL? We see these terms being used in various documents all the time and hardly care about the fact that they are different acronyms, so there has to be a difference.
And it turns out there's a subtle one:
1) URI :- Uniform Resource Identifier
This is like a base class, i.e. if this term is used then it can mean either a URN or a URL. It simply signifies that both URNs and URLs can be used in the context.

2) URN :- Uniform Resource Name
This is a URI which ensures the uniqueness of a name in a given context, e.g. the URN "Flat no. 302". It's unique within an apartment building, but the building's name is not part of the URN. We can say that a URN can be used for identification, but can't be used to locate a resource without a context.

3) URL :- Uniform Resource Locator
This is a URI which can be used to locate a resource. It ensures uniqueness along with the mechanism to locate it, e.g. http://luckyabhishek.blogspot.com is a URL which is unique and also communicates the mechanism to discover/locate it.


Interesting and subtle differences pointed out by Ramkumar (our instructor).

Let me go back to the training now, as Ram has already figured out that I am not listening to him and doing something else. Alas... he won't know that I am blogging the discussion we just had....

~Abhishek

Wednesday, September 23, 2009

Unit Tests for registry read fails in Visual Studio on 64 bit machines

My my my...
A hotch-potch of Windows Server 2008 and Visual Studio 2008 on a 64 bit machine killed 3-4 hours of my day today...

So here's the deal.
I have a method in one of my DLLs which reads a certain value from a sub key of HKEY_LOCAL_MACHINE\SOFTWARE. I wrote a unit test to test this method, and it turns out the test fails to read the registry. I was amazed, because the registry entry did exist. After struggling for about an hour I decided to write a console application to test the method. And the method worked :O. So now I was in a situation where the method works from a console application but fails from a unit test.

I thought it was a permissions issue, so I gave full trust to both the assemblies, i.e. the unit test assembly and the assembly I was testing. Even that didn't help.
Looking at the Task Manager I saw that the unit tests run under a process called VSTestHost.exe. So next I tried to give full trust to this exe, however that's not possible since it is a Win32 exe.

A relook at the Task Manager showed me that the process runs as VSTestHost.exe*32, meaning a 32 bit process running on a 64 bit OS. Nothing suspicious about it at first look. However, if you look at the nodes below HKEY_LOCAL_MACHINE\SOFTWARE in the registry, you see a node called Wow6432Node, and that made me think. After some searching I figured out that if you run a 32 bit process on a 64 bit machine and try to read the subkeys of HKEY_LOCAL_MACHINE\SOFTWARE, the registry reads are actually redirected to HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node. So I made an entry in this node and it worked.
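For what it's worth, .NET 4 later added a RegistryView enum so that a 32 bit process can explicitly opt out of this redirection; a sketch (the key and value names are placeholders):

using Microsoft.Win32;

class RegistryViewDemo
{
    static void Main()
    {
        // Open the 64 bit view of HKLM even from a 32 bit process,
        // bypassing the Wow6432Node redirection.
        using (RegistryKey hklm64 = RegistryKey.OpenBaseKey(
                   RegistryHive.LocalMachine, RegistryView.Registry64))
        using (RegistryKey key = hklm64.OpenSubKey(@"SOFTWARE\MyCompany\MyApp"))
        {
            System.Console.WriteLine(key == null ? "not found" : key.GetValue("Setting"));
        }
    }
}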

Lesson :- Be careful about this redirection if your program depends on the registry to work.

Happy coding :)

~Abhishek

Thursday, September 17, 2009

Making SQL Server Replication work on Network Service Account

This is one of the topics I have been working on for the last few days. There's not much information available on the web on how to do this, so I decided to log it on this blog.

Problem Statement :-
Set up SQL Server replication while running SQL Server under the "Network Service" account.

Normal Convention :-
Most local SMEs I spoke to said that it is not possible to set up replication between SQL Servers using the Network Service account, and suggested that I use a domain account instead. However, the problem with that solution is that the domain account password expires after some time, and then we may even face production downtime.

Proper Solution :-
The SQL Server service, like any other Windows service, can run under Local Service, Network Service or domain account credentials. Now, if you want to set up replication, the Local Service account is of no use, since it has no identity over the network. A domain account is not ideal, since passwords have to be changed at regular intervals, resulting in downtime as well as maintenance costs. So the Network Service account is the ideal way to go. This account presents the machine identity over the network.
So if we want to set up replication using this account, the machine identities should have access to the databases. We can create a security group in Active Directory and add all the machines which need to talk to each other in replication to this group, then give this security group permissions on all the databases.
This way, when the replication service on one machine sends replication related instructions to another machine, it presents the machine credential which has permissions on the database, and replication has no security related problems.
So, to summarize (a script sketch follows the list):
1) Make sure the SQL Server service and SQL Agent service are running under the Network Service credential
2) Create a security group in the domain and add all the SQL machines which will be part of the replication to this group
3) Give this security group appropriate permissions on all the databases which will be part of the replication.
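A hedged T-SQL sketch of step 3; the group and database names are placeholders, and your topology may call for narrower permissions than db_owner:

-- Grant the machine-account group access on a replication database.
CREATE LOGIN [MYDOMAIN\ReplicationServers] FROM WINDOWS
GO
USE MyPublicationDb
GO
CREATE USER [MYDOMAIN\ReplicationServers] FOR LOGIN [MYDOMAIN\ReplicationServers]
GO
EXEC sp_addrolemember N'db_owner', N'MYDOMAIN\ReplicationServers'
GO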

Happy coding...

~Abhishek

Monday, September 7, 2009

Resource Oriented Architecture - Part 2 (Statelessness)

The idea behind writing a web service was that you should have a contract to work against. The policy, address etc. to invoke the implementation of this contract can be figured out at runtime. A component on a remote machine will process my input and get back to me with some output, based on the design time contract which I programmed against.

Now, this works great when I look at the tools which are available to me as of today... I have the WCF framework, with which I can define my operation contracts, message contracts, data contracts etc. in code, and the security policy, binding and address on which I want to expose my contract can be decided at deployment time in a configuration file. The best part is that in most cases I do not even need to care that I am writing a service which will be used from a remote client, since even things like session management can easily be handled by the framework. I feel like I am almost doing plain object oriented programming. However, if I do use the Session in my service, an additional overhead is added to the service infrastructure for maintaining this session, which means I have put scalability challenges into my code. Because to support sessions I have to use session affinity or out of process session management, neither of which is a very bright idea for scaling out an application.


The World Wide Web is a very scalable architecture, and one of the reasons for that is its statelessness.
Let's try to understand what statelessness means for an application....
In any application we write there are 2 states, i.e. the state of the client and the state of the resource which the client is asking for. When we talk about statelessness, we mean that the server should only be concerned about the state of the resource it is serving to the client. It should not be concerned about the state of the client. e.g. when I type the URL http://www.google.co.in/search?q=ROA&start=80 in my browser, Google returns the results for my query ROA starting with result no. 80, i.e. page 9. It doesn't care whether I clicked through the last 8 pages or not, because that is the state of the client. The state of the resource is present on the server and is served no matter what the state of the querying client is.

While designing a resource oriented application, we should be very clear in defining what the resource is that we're exposing, what the state of the resource is, and what can be classified as client state. Then we should ensure that the client state is maintained by the client and sent to the server when needed in some form (mostly as part of the URL; wait for my posts on addressability and the uniform interface).
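As a sketch of what this looks like in the WCF 3.5 web programming model (an illustrative contract; all names are made up), the client state, here the query and the start index, rides in the URL on every request, so the server keeps nothing between calls:

using System.ServiceModel;
using System.ServiceModel.Web;

[ServiceContract]
public interface ISearchService
{
    // Every request carries the full client state in the URL.
    [OperationContract]
    [WebGet(UriTemplate = "search?q={query}&start={start}")]
    string Search(string query, string start);
}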

~Abhishek

Resource Oriented Architecture - Part 1

Defining the Agenda :-

If I look at it, SOA has been one of the biggest BUZZwords I've seen since I came into the programming world. I came from an object oriented background, and web services, or RPC style web services, looked like a perfect way to build an SOA based application to me. However, of late I am getting the feeling that writing web services the way we do today is probably not the perfect way to build applications.


In this 5 part series I am going to talk about the various tenets of Resource Oriented Architecture, and figure out what next steps I can take to understand or define a Resource Oriented Architecture for a business process......

~Abhishek

Microsoft TR9 @ Seattle

I attended Microsoft TechReady 9 in Seattle in July and have been away from this space ever since :). TR9 was one of the great experiences I've had in my 1 year @ Microsoft.
For someone who doesn't know what TechReady is, it's basically the biggest Microsoft Services event, where Microsofties from the field get together to share their learnings and to get a gist of what's coming next from the product groups. Now, I can't share the details of what I saw there in terms of upcoming products, for obvious reasons, however as I see it... the future looks bright.
I also got an opportunity to present on Developing Secure RESTful Services using Microsoft WCF 3.5.
It was a great experience, and the attendee party in The Commons @ Redmond was great :)
All in all a nice experience, and I got a new topic to work on, i.e. ROA and RESTful services....

Stay tuned: I am now working on ROA, so you'll see some posts coming soon, and one might be out in a couple of hours or even less....

~Abhishek

Tuesday, July 21, 2009

mysqld doesn't work on Windows 7

I installed MySQL 5.1 on my Windows 7 installation. I didn't use the Windows service option, as I wanted to run the server from the command prompt.
Now, when I tried to start the server using mysqld, it did nothing. So I used the mysqld --help option to figure out what's going on, and it turns out Windows 7 UAC was at it again. So now I have to run the command prompt as administrator in order to ensure that the database works :)

Happy coding....
~Abhishek

Tuesday, June 30, 2009

Http Module not loading in IIS

For the project I am working on, we created an authorization component which loads as an HTTP module and works on the user credentials.
The module was defined in web.config for the web application and was working fine, until one day I tried to play with the App Pool settings in IIS and the module just stopped loading.
After some searching on the web I finally figured out the problem: while playing with the App Pool settings I had changed it to run in Integrated mode instead of Classic mode.
When the application pool runs in Integrated mode, the module has to be registered in web.config under the system.webServer element, while in Classic mode the HTTP module has to be registered under the system.web element of web.config.
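For reference, the two registrations look like this (a hypothetical module named AuthModule in an assembly named MyApp):

<!-- Classic mode -->
<system.web>
  <httpModules>
    <add name="AuthModule" type="MyApp.AuthModule, MyApp" />
  </httpModules>
</system.web>

<!-- Integrated mode -->
<system.webServer>
  <modules>
    <add name="AuthModule" type="MyApp.AuthModule, MyApp" />
  </modules>
</system.webServer>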

Why this matters and what these 2 modes mean is something I still have to figure out, but for now I am happy that my module is up and running so I can go ahead and work on my tasks :)

Happy coding

~Abhishek

Thursday, June 18, 2009

HTTP Error 503. The service is unavailable.

Another one of those errors which drive a developer crazy.
One of the reasons it could be occurring is that your site is running under an App Pool which is unable to start, for reasons like a wrong identity etc.
The problem is that it is very difficult to catch this, since the site seems to be up and running in the IIS 7 management console.
Once you reach the place where all the App Pools are listed, you might see that the App Pool is not started, and hence the problem. There's no way the management console gives you a hint as to what might be going on; at least on the console it looks like everything is up and running fine.
So next time you see an HTTP Error 503, go and check the App Pool status before doing anything else.
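A quick way to spot stopped pools from the command line is IIS 7's appcmd tool; something along these lines should work (the filter syntax is from memory, so treat it as a sketch):

%windir%\system32\inetsrv\appcmd list apppool /state:Stopped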
@IIS Console team :- It should be easy to figure this one out and display an error to the administrator saying something is wrong with the App Pool settings.

Happy coding...

~Abhishek

Tuesday, June 9, 2009

Name of the server on which a query is executing

Sometimes, while executing a batch of queries, it is a must to find out which server the batch is executing on, e.g. while creating a linked server you might not want to create a linked server pointing at the local server.
This can be achieved by querying the sys.servers catalog view and checking for the server with server_id 0.
So the query to do this is

select name from sys.servers where server_id = 0
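For example, a script could use it to guard linked server creation, along these lines (the remote server name is a placeholder):

-- Skip creating the linked server when the target is the local server.
declare @local sysname
select @local = name from sys.servers where server_id = 0
if @local <> N'REMOTESRV'
    exec sp_addlinkedserver @server = N'REMOTESRV'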

Happy coding...

~Abhishek

Monday, June 8, 2009

Synonyms and Linked Servers in SQL Server

Someone in my project wrote a Perl script which generated a SQL script to create synonyms to other databases.
Now, this SQL script used to run after the required databases were deployed, and it worked fine until we went into the pre production deployment phase of the project. As soon as we reached this environment, the setup itself started failing. The only difference between this environment and the test environment was that the databases were hosted on different machines.
After some investigation we established that the generated SQL script didn't take this into consideration. It was working fine in the dev environment since all the databases were hosted on a single machine. As soon as the databases were separated, the database names we were referring to in the script became invalid.

Linked servers came to the rescue in this situation. Basically, we added a linked server to the server we wanted to create the synonyms in, and qualified the table names with the linked server name as well.

On digging further, the linked server turns out to be an important and useful feature in SQL Server, since it can not only make one SQL Server interact with another, it can actually make SQL Server interact with any other OLE DB source.
We can even use it to do distributed transactions across various data sources.

The quickest way to add a linked server is to use the wizard in SQL Server Management Studio. One can also use the stored procedure sp_addlinkedserver in the master database; sp_serveroption can be used to configure the options on how to connect to this linked server.
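A rough T-SQL sketch of the combination we ended up with (server, database and table names are made up):

-- Register the remote SQL Server box as a linked server...
exec sp_addlinkedserver @server = N'ACCOUNTSRV'
go
-- ...and point the synonym at the four-part name through it.
create synonym dbo.RemoteAccounts
    for ACCOUNTSRV.AccountsDb.dbo.Accounts
go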


More information can be found here.

Friday, June 5, 2009

Enabling SQL Server to access directories...

I am working on SQL Server nowadays, and while writing a component to restore a database programmatically from a backup file, one needs to access the file system from within the database.

Now, normally a database won't allow you to do this, as ideally a database shouldn't depend on anything outside it. Even the metadata of the database's own schema is stored within the database (one of the basic principles of databases).
In order to enable the database to access the file system, we can use the following script...

exec sp_configure 'show advanced options', 1
go
reconfigure
go
exec sp_configure 'xp_cmdshell', 1
go
reconfigure
go
exec sp_configure 'show advanced options',0
go
reconfigure
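
Once that's enabled, shell commands can be run from T-SQL, e.g. (the path is illustrative):

exec xp_cmdshell 'dir C:\Backups'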


Happy coding,

Abhishek

Friday, May 29, 2009

Disabling Enhanced Security in IE 8 on Windows Server 2008

I had to install some prerequisites on Windows Server 2008, and I started by browsing the intranet sites to find them.
Now, the problem on Windows Server 2008 is that all kinds of security features are installed by default, and hence none of the intranet sites were working. I was unable to download any files either.
The quick fix for this is to disable the Enhanced Security Configuration feature of IE 8 on Windows Server 2008. This can be done by going to
appwiz.cpl -> Turn Windows Features on or off -> Security Information snap-in -> Configure IE ESC
and disable it.

And now IE works normally. It's a good trick for a development environment, and definitely not recommended on a production server.

Happy coding :)

~Abhishek

Wednesday, May 20, 2009

Re-signing a 3rd Party Assembly

Many a time we have to ship 3rd party assemblies along with our code. The catch comes when the 3rd party assembly is unsigned, while you have a strong need to sign the assemblies being shipped by you.
If you try to reference an unsigned assembly from an assembly which is being signed, you get an error which says "Assembly generation failed -- Referenced assembly 'xxx' does not have a strong name".

The easy solution to this problem is to re-sign the 3rd party assembly using your key.
You can find out how to do it on the blog by allkampfer at this link.
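The usual trick, sketched from a Visual Studio command prompt (file and key names are placeholders): round-trip the assembly through IL and reassemble it with your strong name key.

rem Disassemble, then reassemble with a strong name key
ildasm ThirdParty.dll /out:ThirdParty.il
ilasm ThirdParty.il /dll /key=MyKey.snk /output=ThirdParty.dll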

Happy signing :)

~Abhishek

Friday, May 15, 2009

params keyword in C#

It's strange but true that today, for the first time, I got a requirement where I wanted to pass a variable number of parameters to a method I was writing in C#. Now, I always knew it was possible, however as soon as the requirement struck I just didn't have a clue about how to achieve it.
And so I hit MSDN, and boy, it's so simple.
You just need to use the params keyword.

So a method which takes multiple parameters of different types can be written as

public static void MethodName(params object[] parameters)
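
Fleshed out with a quick illustrative usage:

public static void MethodName(params object[] parameters)
{
    // The compiler packs the caller's argument list into the array.
    foreach (object p in parameters)
        System.Console.WriteLine(p);
}

// Callers can then write:
MethodName(1, "two", 3.0);
MethodName();   // even zero arguments works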

And it just works :)

Happy coding....

~Abhishek

Thursday, May 14, 2009

Using Fusion Logs to debug the assembly binding failures

While working with WCF I wrote a custom client message level interceptor, and as usual the WCF runtime tries to load the assembly containing the interceptor type using reflection.
While doing this, the WCF runtime threw an exception which suggested that the assembly binding failed, however no information was supplied about why the binding was failing, except that it failed with HRESULT 0x80131040.
In order to find the exact reason, we can enable the fusion logs and watch the actual binding steps to find the actual problem in the binding.
To do this, follow these steps (an equivalent one-liner follows the list):
1) Create a DWORD value named EnableLog under the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Fusion and set it to 1
2) Open the fusion log viewer (fuslogvw.exe) from a Visual Studio command prompt
3) In the settings of the log viewer, select the option to log binding failures to disk
4) Restart the process which is having the binding failures
5) Refresh the log viewer and you'll see the exact reasons why the bindings are failing
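The registry tweak from step 1 as a one-liner (run from an elevated command prompt):

reg add HKLM\SOFTWARE\Microsoft\Fusion /v EnableLog /t REG_DWORD /d 1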

Happy coding :)

~Abhishek

Tuesday, May 5, 2009

System.CodeDom.Compiler.GeneratedCodeAttribute for suppressing generated code warnings

For one of the utilities I am writing, I am using a resx file to store various blocks of code and generating a C# class file based on an XML file. Because of the dynamic generation of the blocks I am not able to indent the code properly, and hence when I compile this code I obviously get StyleCop warnings in my project.
When I investigated further, I saw that StyleCop warnings are not raised for the designer files generated by Visual Studio for ASP.NET and WinForms applications. One of the differences I found between the files generated by Visual Studio and the files generated by my utility was that Visual Studio puts a tag <auto-generated> in the file header, while I put a tag called <autogenerated/> in mine.
Anyway, changing this didn't affect it at all, and the warnings were still generated in the compilation log.
So I put a System.CodeDom.Compiler.GeneratedCode attribute on my class, and bingo, all the warnings were happily ignored by the compiler.

So if you are writing any tool to auto generate code, use this attribute to suppress all the warnings from the auto generated code. Of course, you need to be sure that the warnings are harmless and ignorable (in my case they were all StyleCop warnings and no FxCop warnings).
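For reference, the attribute takes a tool name and version; the values below are made up:

// Any free-form tool name and version will do.
[System.CodeDom.Compiler.GeneratedCode("MyCodeGenUtility", "1.0.0.0")]
public class GeneratedWrapper
{
    // ...generated members...
}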

~Abhishek

Style cop and helper utilities

One of the basic necessities when you are working on a project is that the StyleCop warning count in your code should be zero. If there is a single StyleCop warning in the code, one can run into several issues:
1) I don't like it when there's a warning in the Visual Studio compile log
2) I don't like it when my build log on the build server becomes several hundred KBs just because of StyleCop warnings, and finding what actually went wrong in that noise is painful

For the record, StyleCop is a Visual Studio plug-in which ensures that the code is written in a consistent way, i.e. tabs, indentation, regions etc. are defined very clearly in the code. It can even be integrated into MSBuild so that the build points out the errors and warnings. It can be downloaded from MSDN if you want it for your project.

Once you start working with StyleCop, you'll sometimes find it difficult to take care of every warning yourself. The trick is to do the hard work early on by fixing each and every warning manually; slowly you'll get into the habit of writing StyleCop compliant code.

Here are a couple of utilities which can help in case the warnings are huge and there's a time constraint on delivering the code:

1) NArrange :- Takes care of beautifying the code. It can be downloaded from here. Use it carefully on existing files, as it rearranges the whole file according to StyleCop rules and more. It clearly demarcates the regions for fields, methods etc. The catch is that if you already have a version of a file in your source control, the tool will rearrange the code in such a way that diffing the file against the version in source control is not going to help much.

2) GhostDoc :- An intelligent documentation tool which generates documentation for methods based on their parameters. It can be downloaded from here. Most of the time, if you are using proper naming conventions, the documentation generated by this tool works pretty well; however, sometimes you might want to review it. What's more, it can even carry your customizations on a method from the interface to the class automatically.

So use these cool utilities and write your code the StyleCop way.
Happy coding....

~Abhishek

Monday, May 4, 2009

Bad Request in WCF REST services

While working with WCF services, we tend to fall back on the FaultException or FaultException<TDetail> classes to send errors occurring on the WCF server to the WCF client.
Now, this works with most of the bindings in the WCF framework. However, if one is using WebHttpBinding in order to expose the service as a REST service, FaultException or FaultException<TDetail> doesn't work, i.e. if you throw this exception from the server, the client gets a Bad Request error. Now, this is funny and confusing. I was testing my service by sending raw XML in HTTP requests, and when I saw a Bad Request error I was investigating the schema of the request instead of debugging the service.
After checking the schema many times, I finally changed the data in the XML, and bingo, it worked. That kind of made me understand that any exception on the service results in a Bad Request error on the client side.
Now, this is not the way we'd like our exceptions to go from server to client. We want to send more specific details about what happened, so that the client can take appropriate corrective measures.

One way to achieve this is to define a ResponseStatus enum and include a value of this enum in the response messages from the service operations. This enum will keep growing as we want to send more details about the exceptions; then, instead of throwing an exception or fault from the service, we set the responseStatus in the response message, which the client can use to take corrective actions. This obviously is an overhead, since this is not the way we'd like to communicate exceptions when using non-WebHttpBindings.

Another way could be to use the extensibility features of WCF to set the HTTP status code and status description in the HTTP response. We can intercept the response from the service just before it is sent to the client and set the details appropriately. I want to explore this option in detail if possible, however for now I'd go with the first approach, as my customer has asked me to provide a REST service by this evening :).
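A sketch of that second approach with the WCF 3.5 web programming model (the status code and description here are just examples):

using System.Net;
using System.ServiceModel.Web;

// ...inside a service operation:
OutgoingWebResponseContext response = WebOperationContext.Current.OutgoingResponse;
response.StatusCode = HttpStatusCode.NotFound;
response.StatusDescription = "No account with this id";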

~Abhishek

Unable to see the $Exception details in Visual Studio

I was away from coding in Visual Studio 2008 for a while; instead I was using it for drawing class diagrams and reading some code.
While I was doing that, I disabled some package because of an error, and as it turned out, I was then not able to use the exception details view in Visual Studio, i.e. if I ask Visual Studio to break when an exception occurs by going to Debug -> Exceptions, it breaks but doesn't show me the details of the exception which occurred.
As soon as this happened, I went to the Visual Studio output window and saw a message in the general category which said that Visual Studio is unable to load package "8D8529D3-625D-4496-8354-3DAD630ECC1B" and that I should use "devenv /resetskippkgs" to load the package.
When I tried to do that, the package still failed to load, without giving any reason as to why.
On doing some searches, I figured out that this problem happens many times in Visual Studio 2008, and the solution is to repair .NET Framework 3.5 SP1 from the Add/Remove Programs snap-in.
Once the repair is done, use the /resetskippkgs option to start Visual Studio, and the problem is resolved.
I am not sure if this is the most efficient way to solve this problem, but it works nonetheless :)

~Abhishek

Friday, May 1, 2009

Consuming WCF services on SSL

Okay, so it's been ages since I posted something here, and just now I solved a problem which I have struggled with so many times during the past 2-3 months. So I decided to document the solution here, as it might help someone else, or me, fix this sometime in the future :).
So, I am working with a WCF service exposed over IIS with an SSL binding.
I set up a WCF client for consuming this service, and everything was supposed to work out of the box. However, there are hiccups one faces. And the worst part is that you start getting security exceptions which give hardly any information about what the problem could be. All I was able to understand was that there's a problem establishing a secure connection over TLS/SSL.
So here are a few troubleshooting steps:
1) If it's a development/test environment and you are using an SSL certificate issued by an internal authority, then ensure that you have the root certificate authority installed in your trusted authorities store. (In a typical development environment, certificates are stored on a shared folder and all the people in the team use different copies of certificates from this shared folder. Sometimes the certificate authority's certificate has versions, and the one you trust is a different version itself.) The safest option is to hit the service from IE, check the certificate, export the root certificate authority to a .CER file and install it in your local machine store. By default the wizard will install it in your personal store, though it's a good idea to copy it to the local machine store, because you might be using some other account to consume this service.

2) Another problem one faces quite often in a development environment is that the SSL certificate is issued to a specific machine while multiple developers use the same certificate to work in parallel. In such a case, the consumer of the service might face an exception, as the name of the machine to which the certificate was issued is different from the name of the machine it is being presented by. To skip this check, we can configure the ServicePointManager component in .NET to skip the name verification process. To do this, add the following section to your config file for the WCF client....
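A minimal sketch of that section (the checkCertificateName attribute under system.net/settings is the relevant switch):

<configuration>
  <system.net>
    <settings>
      <servicePointManager checkCertificateName="false" />
    </settings>
  </system.net>
</configuration>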
I'll update this post as I face more issues with SSL......

~Abhishek

PS :- I am working on a reusable performance counter application block which I might publish on this blog in some time.

Monday, January 5, 2009

My day out with GPS in Hyderabad

Hey all, wish you all a very happy new year 2009. Hope your resolutions last at least through the first couple of weeks of the year :).

I recently managed to get a 30 day free trial license to use GPS on my N95 mobile phone. I've been in Hyderabad for almost 4 months now, however I am still not very comfortable finding my way around Hyderabad. Today I had a doctor's appointment at 4.00 PM (at Banjara Hills) and an official appointment at 5.00 PM (at Gachibowli). Since I was driving with time limits, I decided to use the GPS to avoid asking people for the way to Banjara Hills. (It's quite normal and convenient to ask your way around in India if you don't know it. People are more than happy to help.) Now, the first thing I had discovered when I installed the maps a couple of days back was that for the N95 maps, Hyderabad ends near Madhapur (@N95 Maps Team :- Wake up.... most of the people using your maps will be living and traveling near the Hi-Tech City, Gachibowli, Miyapur, Madhapur area).
So I decided that I'd drive till near Jubilee Hills and then the GPS should start guiding me. And indeed, as soon as I was near Jubilee Hills the voice guidance started. Now, this GPS software took me into a slum with its directions. I didn't even have the choice of making a U-turn, since the slum lanes were so narrow that one can't make a U-turn on a single road, and to get out you need to be an expert. The guidance then maneuvered me across the slum quite efficiently, but it was one hell of a stupid experience. After I crossed the slum at a speed of less than 5 km/hour, it took me out of the slum via a road which was so steep that my car was struggling to climb up even in 1st gear. I somehow managed to maneuver my car on that road without rolling back and hurting the car or other people.
Then, after some more untarred roads, I finally found a wide road, and after crossing that road the guidance was leading me into another slum, and so I gave up on the GPS. I fell back on the good old way of asking people and driving, and I was at the doctor's place in no time. However, the whole GPS idea made me almost 20 mins late for the appointment. This cascaded, as the doctor was busy with other patients, and of course I missed my official appointment because of that. The worst was yet to come. I decided to inform my colleagues by mail that I would be late, however installing the GPS had messed up my mail4exchange software and the phone kept crashing :(. And now it seems I have to uninstall the maps somehow and pray that mail4exchange works without a firmware reinstallation.
Also, once I switch on the GPS, if I try to use the memory card in the phone for data transfer, it keeps saying that the card is in use by the phone. So I need to restart the phone before I can even think of doing a data transfer.
So overall, the GPS voice guidance on the N95 sucks, and I wouldn't recommend anyone use it, in Hyderabad at least.

~Abhishek