Wednesday, November 4, 2009

Code Access Security

In the world of network development, you always have to think through security issues in order to traverse servers, shares, locations, processes, impersonation, etc. However, there is a deeper level of security that can throw you for a loop if you haven't had to deal with it before: Code Access Security.

HISTORY
In the wars that raged to produce extensible code, Microsoft pulled a hail-mary by releasing ActiveX. It sounded cool, but it was essentially a response to Java's applet engine, which allowed a downloadable library to run on a person's machine and be executed and controlled via remote procedure call (RPC). An example of this might be a chat window: both clients would download an applet and use a relay server to traverse NAT. The application would call out to the server, and the response would encapsulate a command to activate code. Newer evolutions of this idea bear on technologies like SOAP, browser push, etc.

Microsoft needed its own way to give RPC access to COM libraries; hence the hurried release of ActiveX. However, in the process, insufficient thought was given to the environments in which this technology would be deployed. While Sun, by default, had to think through these concerns to make the JVM portable to different systems, Microsoft's ActiveX simply had the task of linking RPC functionality to COM applications.

The problem was introduced in the following paradigm: malware. Using the relatively insecure ActiveX component, a site could pass an object that would, for example, call for the reformatting of a hard drive, while claiming to be a graphics enhancement or something else benign. As long as the user running the ActiveX control had sufficient privileges to execute the code, the program would run and voilà! No files. Big problem.

Enter CAS. With the .NET Framework, Microsoft produced Code Access Security, which adds a layer of protection.

HOW IT WORKS
When you execute code, your machine checks the assembly you are executing to see what level of privileges is necessary to run it. Every conceivable condition can be constrained. For example:

Code on a network share on HOST1 that requests access to a SQL Server, executed by the .NET Framework running on HOST2, can fall under a different security zone/attribute/setting than the same code running from a local drive.

A brief run-down of the process is as follows:

1. .NET looks at a particular strongly-named assembly, verifies the signed hash information, and reads the levels of trust requested by the assembly.
2. .NET then checks which zone the particular assembly qualifies for. So, if it is a DLL on a network share, it falls into the local-intranet zone.
3. .NET looks at the local machine's rules for that zone, and makes sure that all the levels of permission the DLL needs are allowed for that zone.
4. .NET opens access to the assembly.
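The evaluation above can be observed directly with caspol.exe. As a sketch (the paths are placeholders for your own assembly), you can ask .NET which code group and permission set a given assembly resolves to:

```
rem List the code groups defined at the machine level
caspol -m -lg

rem Show which code groups a specific assembly matches
caspol -rsg \\{servername}\{sharename}\MyAssembly.dll

rem Show the resulting permission set granted to that assembly
caspol -rsp \\{servername}\{sharename}\MyAssembly.dll
```

Comparing the -rsg and -rsp output before and after a policy change is a quick way to confirm a new rule actually took effect.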

Strong-naming
So, the question then is: what is strong-naming? This is the linchpin for developers. In order to deploy code into production, the developer needs to be able to wrap the assembly in a strong name that the executing .NET environment can evaluate. This involves signing with a public/private key pair, which centralizes control of the build.

A strong naming process essentially:

1. Reads the contents using reflection and checks the levels of CAS necessary to execute to completion on all forks and levels.
2. Signs the assembly with the key provided.
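The tooling side of this is sn.exe from the .NET SDK. A sketch, with placeholder file names (the VB compiler is shown since this blog's projects are VB; C# uses csc the same way):

```
rem Generate a public/private key pair
sn -k MyKeyPair.snk

rem Sign the assembly at compile time using the key file
vbc /target:library /keyfile:MyKeyPair.snk MyModule.vb

rem Verify the strong-name signature on the resulting assembly
sn -v MyModule.dll
```

In Visual Studio you would normally do the same thing from the project properties Signing tab rather than the command line.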

The problem you usually run into in development:
You are going along, minding your own business, and all of a sudden, code that works perfectly in Visual Studio starts saying things like:
Unknown tag
or
Cannot load assembly

While a number of things, including too much coffee or too little talk radio, could be the chief contributor, often it is because your code has moved into a different code zone, or because you have not strongly named your assembly.

So, the answer is to configure CAS using caspol.exe.

If you are using Visual Studio, open the Visual Studio Command Prompt, and the environment variables will be loaded to allow you to use this command. If not, you need to install the .NET SDK.

A couple of useful commands are:

Granting access to code on a network share by adding it as a code group with FullTrust:
caspol -m -ag 1.2 -url file://\\{servername}\{sharename}\* FullTrust

Granting access to a non-strongly-named assembly on a network share:
caspol -m -ag 1.2 -hash SHA1 -file \\{servername}\{sharename}\{dirpath}\{filename} FullTrust
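When experimenting with these, it helps to be able to see and undo what you have added. A few more hedged examples (the group label 1.2.3 is hypothetical; check the -lg output for your real labels):

```
rem List all machine-level code groups with their labels
caspol -m -lg

rem Remove a code group you added by mistake, by its label
caspol -m -rg 1.2.3

rem Reset machine policy back to the installation default
caspol -m -reset
```

The -reset option is the escape hatch if you tangle the policy tree beyond recognition.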

I will add more to this post as I come up with it.

Friday, October 2, 2009

x64 Architecture and Oracle.DataAccess - a rant

Though slightly off-topic, I want to take a moment to rant about something I have been trying to work out for a day now.

64/32-bit systems are a pain! Try being on a development team where some of the workstations are 32-bit and others are 64. It's crazy. Beyond that, our server, Windows Server 2008, is x64 as well.

Round 1. - MS. vs. Oracle - 32-bit allowances
Oracle has yet to publish a client-based, stable 64-bit driver for their ODAC/ODP.NET architecture. In order to get software designed to run under the current architecture, you have to jump through hoops, or enable running 32-bit software on the 64-bit architecture. So, even if you have a driver that is only stable and useful in its 32-bit form, you can relegate sections of your application to run under WOW64 (Windows 32-bit on Windows 64-bit), which runs the 32-bit process on the 64-bit OS. DING - Round 1 to Microsoft.
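If you need to pin a .NET executable to the 32-bit driver on an x64 box, corflags.exe (shipped with the .NET SDK) can mark the assembly so it always loads under WOW64. A sketch, with a placeholder file name:

```
rem Inspect the current platform flags of the assembly
corflags MyApp.exe

rem Force the assembly to run as a 32-bit process under WOW64
corflags MyApp.exe /32BIT+
```

In Visual Studio 2008 the equivalent is setting the project's Platform Target to x86 instead of Any CPU.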

Round 2. - Debugging Capabilities.
In order to remote debug, however, one must not run in WOW64 mode, because neither the 32-bit nor the 64-bit debugger will attach properly and catch exceptions/breakpoints in the managed code. DING - Round 2 to nobody... double KO. (Not really fair, because Oracle had nothing to do with this, but hey, it's always fun to KO another big guy.)

Round 3. - AMD64/IA64/I64
Um. Ding.. No idea.

Round 4. - DLL / GAC / REGISTRY....
So, I tried to find a 64-bit Oracle.DataAccess driver to use on Server 2008, and it is rather difficult. Their newest 11g pieces don't cut it, as they fail at the very beginning of installation. I also tried downloading various packages, but instead of including the files, the installers segment everything into Java JAR files to be built on installation... catch-22. Finally I found a post on the Oracle Forums where a guy had installed the Oracle Server and realized that it had installed the ODP.NET driver in good AMD64 fashion. Thus, the solution was to install the Oracle Database Server, with everything possible shut off in the installation, to get the DLL to use in the VB project. Once that DLL was in the project and referenced correctly, all was well.
DING - Round 4 goes to Microsoft by default. (The judges rule that even though Oracle produced a product that functions, the fact that it is a quest of Napoleonic proportions to obtain means that it defaults to MS.)


Thursday, August 27, 2009

Development Strategies - Assembly Positioning

Application frameworks like DotNetNuke can really stretch the conceptual basis for development within Visual Studio. Solutions, Projects, References, and other modeling components must be considered differently within the strategy.

GAC vs. DNN (Copy Local)
When developing in VS, if there is a bit of functionality that you would like to use but do not have yet, you may go directly to the open-source community to find the widget. However, this widget might have an installer which puts the assemblies into the Global Assembly Cache (GAC). When you reference that code in your project and run your builds, you will likely not notice the references made to the GAC until you try to deploy your project onto a production server. When you do, your application may not be able to locate the resource. This leads to a very interesting discussion question -- when I build my projects, how do I make them fully portable when solutions reference my local cache?

If you have run into this problem before, you might be thinking that the solution is very simple: right-click the reference in Studio and set it to "Copy Local." If configured correctly, when the build runs (this happens when you Build or Rebuild), the referenced assembly is copied out of the global cache into the local project output, making it fully portable. Solved!
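For reference, the Copy Local setting is persisted in the project file as a Private element on the reference. A hypothetical fragment from a .vbproj (the widget name and path are made up for illustration) might look like:

```xml
<Reference Include="SomeOpenSourceWidget">
  <!-- Where to find the assembly at build time -->
  <HintPath>..\lib\SomeOpenSourceWidget.dll</HintPath>
  <!-- True = Copy Local: the DLL is copied into the build output -->
  <Private>True</Private>
</Reference>
```

Checking this element in source control is a quick way to audit whether a teammate's reference will travel with the project.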

However, in the DNN world, all projects are developed within, and then deployed into, a framework which is a layer up from .NET. The significance of this, from a development perspective, is more fully understood when we look at trying to have projects reference each other.

For example, if you wanted to add Functionality: F to the portal and have the Widget: W access it, the process would go like this:
  1. Develop and deploy F into the portal.
  2. Develop and deploy W, with a reference to F into the portal.
However, a problem arises when W is being developed. Unless W and F are in a state of total solidarity, where it is fully known that no environmental factor affecting W will affect it apart from F, and vice versa, one cannot develop in the normal Visual Studio strategy.

In the normal VS strategy, one would create a solution and add a new project, F. Then W would be added, and in W's references, the F project would be referenced. Upon deployment, both packages would have no problem communicating via their compiled assemblies.

However, say you were working on a team of people establishing a level of functionality beyond the scope of the default portal modules, and you installed 5 different modules, all contributing to an overall strategy but independent in scope. You would either need to make everything one huge solution, and limit yourself to working one team member at a time, or you would need to make sure that if W references F, the reference will transfer well.

So, all that mumbo jumbo stated, the basic reality comes down to solid practices of Assembly positioning:

DNN Site BIN
It is thus crucially important that all modules contribute their compiled code to the DNN root bin. I have a post that describes how to set up your modules to do this. This will make sure that at least the current portal you are working in is able to resolve references appropriately.

Wednesday, August 26, 2009

Remote Debugging Team Environment ABSTRACT

Remote debugging ends up being very useful in the module development process, especially if you are not doing your development against the IIS on your own machine.

In our environment, we have built a 3-tier development lifecycle strategy. Our DNN portal is being used as a strong application platform for a SOA-based integration. We are leveraging the DNN architecture as an application framework against Oracle Applications, in particular PeopleSoft Contributor Relations.

3-Tier Strategy
We develop against multiple instances of DNN, then move our development projects into a test environment before finally making the step into production. The test environment is refreshed from production very often, so that the upgrade path to test can very easily be duplicated into production. The sandboxes are refreshed on an as-needed basis.

Remote Debugging
Most, if not all, of the debugging occurs in the sandboxed environments. Each developer on our team is assigned a sandbox and a database for that sandbox. All of the sandboxes for development are on the same Server 2008 machine, each running under a different user.

Microsoft packages a remote debugger with the Professional version of Visual Studio 2008. You can copy this installation directory to the server and run it, which exposes the processes of the server to the VS client. There are good tutorials on how to do this, and I won't get into the details here.

Once the remote debugger is up, simply open Tools > Attach to Process in Visual Studio and give the correct server name. You will see w3wp.exe in the process list. (If not, the site has not been used in a while and your worker process has timed out; just browse to the portal in your favorite internet browser and refresh the list in Visual Studio.) If you have set your virtual directories to run their application pools as different users, you will notice the one that correlates to your application pool. Attach to that process and you will be set.

Once you have attached, you are into the debug. You should be able to open any file location that will allow you to attach a valid breakpoint, and step into the process. You can also access other options such as exceptions, SQL and T-SQL debugging, Silverlight, and more.

Happy debugging..

Oh, and when you are done.. detach from the process.. people get frustrated when their code doesn't run..

Tuesday, August 25, 2009

Developing 5.x.x Modules in VS 2008 in Team Environment.

Module development can be very challenging, but with a little effort, it can be greatly simplified. When we talk about team development, though, things change a bit. Here is the scenario:

Visual Studio 2008 (VS08)
3 Member Team
3 Sandboxed Development Portals (sbox1,2,3)

*This article is not going to go into the depths of configuration of IIS, Sql Server, Visual Studio, etc. Feel free to ask questions about the best practice for configuring either.

What we are trying to accomplish is the development of a module, within an existing sandbox, without dependencies on that sandbox. Basically I should be able to check out a module from VSS and develop in my own instance of VS, check the module back in, and let someone else do the same.

Development Tiers
Our development setup has three sandboxes, each with its own database and virtual directory, thus totally isolated. From those development sandboxes, we push modules into a test environment, and then into production. Periodically, we refresh the development sandboxes from the latest production environment. Thus, retention of code in VSS and its positioning in the sandboxes is crucial.

Networking
In order to ensure strong flexibility, we do our development across network shares. This further complicates things, because Visual Studio's access to the file system is governed by .NET trust levels.

Steps
  1. Make sure that you have all the components described in the above list, installed and updated. This is crucial.
  2. Create the file share - Assuming you are using IIS 6+ on the development server (Server 2003/2008), create a file share and set the permissions with at least read/write privileges for the users/group that will be developing against the server. This file share should point at the root of the sandboxes. In our case, we have "c:\inetpub\wwwroot" shared. There are three directories one level down in the tree, one for each sandbox. So the UNC path looks like this: "\\webdevservername\wwwroot", in the format \\{servername}\{sharename}.
  3. Install the DNN Install Code into each of the sandboxed roots. Set IIS VDs to point to each sandbox root, and work your way through the installation process.
  4. Set trust level privileges - Visual Studio will not trust your file share by default. There are some pretty amazing things you can do with GPO in a domain environment to address this automatically, but for the sake of keeping this individualized, make sure that you can manage your own trust level.

    Open the Visual Studio Command Prompt - Start>Programs>Visual Studio 2008>Visual Studio Tools>Visual Studio Command Prompt
    Issue the following command with the information filled in correctly.

    caspol -m -ag 1.2 -url file://\\{servername}\{sharename}\* FullTrust


    If executed correctly, this should keep you from running aground on Trust Issues during the rest of the process.
    *This step will not work if you have not installed the .NET SDK
  5. Create a new DNN module - Open Visual Studio and create a new project. Under Web choose DotNetNuke Compiled Module
  6. Save the project. Make sure you save into your source control! When you save the project, save it into the shared folder location: \\{servername}\{sharename}\{siteroot}\DesktopModules.


    Edit references
  7. By default, when you create a new module, it will reference a non-existent DotNetNuke installation. You will need to delete the current reference.
    a. In the Solution Explorer, click the button for Show all Files.
    b. Expand References and delete the DotNetNuke Reference.
    c. Right-click on References, and Add a Reference.
    d. Under the Browse section, navigate back to the root install location, and into the bin folder.
    e. Select DotNetNuke.dll
    f. Click OK
  8. Right Click on the Project Name in the Solution Explorer, and select Unload Project
  9. Right Click on the Project Name in the Solution Explorer, and select Edit {Project}.vbproj
  10. Find section with the following:

    <Reference Include="DotNetNuke, Version=5.1.1.98, Culture=neutral, processorArchitecture=MSIL"/>

    and fix it so it contains a hint path. Mine looks like this:

    <HintPath>..\..\bin\DotNetNuke.dll</HintPath>

  11. Right Click on the Project Name in the Solution Explorer, and select Reload Project

    Set Build Info
  12. Right Click on the Project Name in the Solution Explorer, and select Properties
  13. On Application tab, make sure that the Root Namespace input is empty.
  14. On the Compile tab, make sure the Build output path: is set to "..\..\bin\"
That's it! You should be able to save the project now, and take it in and out of source control, while building within the portal...
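Pulling steps 7 through 14 together, the relevant portion of the edited .vbproj ends up looking roughly like this (your version number and relative paths will vary with your install):

```xml
<PropertyGroup>
  <!-- Step 13: no root namespace -->
  <RootNamespace></RootNamespace>
  <!-- Step 14: build output lands in the DNN root bin -->
  <OutputPath>..\..\bin\</OutputPath>
</PropertyGroup>
<ItemGroup>
  <!-- Steps 7-10: reference the portal's own DotNetNuke.dll -->
  <Reference Include="DotNetNuke, Version=5.1.1.98, Culture=neutral, processorArchitecture=MSIL">
    <HintPath>..\..\bin\DotNetNuke.dll</HintPath>
  </Reference>
</ItemGroup>
```
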

When you build, the module should put its DLL into the DNN bin folder, which gives you the ability to make changes, rebuild, and view your module's changes in whichever sandbox you saved it into.

In order to traverse the multiple sandboxes, you simply need to check the source back in, and when you get the latest version, set the working directory to the DesktopModules folder in whichever portal you choose.