
Thursday, October 30, 2014

Remote Management with PowerShell (Part 1)

Introduction

The previous article in this series explored Active Directory Domain Services management with PowerShell. Now we will examine the remoting features in PowerShell 4.0 and explore the protocols, services, and configurations needed for remoting to function. There will be demonstrations to highlight how remoting works by getting information, creating objects, changing settings, and assigning user permissions to a group of computers remotely.

Windows PowerShell Remoting

Windows PowerShell remoting provides a method to transmit commands to a remote computer, where they are executed locally. The commands do not have to be available on the computer that originates the connection; it is enough that the remote computers are able to execute them.
Windows PowerShell remoting relies on the Web Services for Management (WS-Management, or WS-Man) protocol. WS-Management is a Distributed Management Task Force (DMTF) open standard that runs over HTTP (or HTTPS). The Windows Remote Management (WinRM) service is the Microsoft implementation of WS-Management. WinRM is at the heart of Windows PowerShell remoting, but the service can also be used by other, non-PowerShell applications.
By default, WS-Man and PowerShell remoting use ports 5985 and 5986 for connections over HTTP and HTTPS, respectively. This is much friendlier to network firewalls than legacy communication protocols such as the Distributed Component Object Model (DCOM) and remote procedure calls (RPC), which use numerous ports and dynamic port mappings.
Remoting is enabled by default on Windows Server 2012; the Server Manager console requires it to communicate with other Windows servers, and even to connect to the local computer where the console is running. On client operating systems, such as Windows 7 or Windows 8, remoting is not enabled by default.
Once enabled, remoting registers at least one listener. Each listener accepts incoming traffic through either HTTP or HTTPS; listeners can be bound to one or multiple IP addresses. Incoming traffic specifies the intended destination or endpoint. These endpoints are also known as session configurations.
When traffic is directed to an endpoint, WinRM starts the PowerShell engine, hands off the incoming traffic, and waits for PowerShell to complete its task. PowerShell will then pass the results to WinRM, and WinRM handles the transmission of that data back to the computer that originated the commands.
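You can inspect the listeners described above directly. As a quick check (assuming remoting is already enabled on the machine), the WinRM command-line tool enumerates the registered listeners:

```powershell
# List registered WinRM listeners: transport (HTTP/HTTPS), port, and bound addresses
winrm enumerate winrm/config/listener
```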
While this article concentrates on the remoting feature of Windows PowerShell, it is worth noting that there are other remote connectivity protocols that are also used by specific PowerShell cmdlets. For instance, some cmdlets use the RPC protocol, others depend on the remote registry service. These numerous communication protocols demand additional configuration on the firewall to allow those PowerShell commands to be executed across the network.

Enabling PowerShell Remoting on a Local Computer

You may need to enable remoting on Windows clients, older Windows Server operating systems, or Windows Server 2012 if it has been disabled. However, keep in mind that remoting must be enabled only on computers that you will connect to; no configuration is needed on the computer from which you are sending the commands.
To manually enable remoting, run the Enable-PSRemoting cmdlet as shown below:
Figure 1
Running the Enable-PSRemoting cmdlet makes the following changes to the computer:
  • Sets the WinRM service to start automatically and restarts it.
  • Registers the default endpoints (session configurations) for use by Windows PowerShell.
  • Creates an HTTP listener on port 5985 for all local IP addresses.
  • Creates an exception in the Windows Firewall for incoming TCP traffic on port 5985.
If one or more network adapters in a computer are set to Public (rather than Work or Domain), you must use the -SkipNetworkProfileCheck parameter for the Enable-PSRemoting cmdlet to succeed.
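As a minimal sketch, both forms of the command look like this (run from an elevated PowerShell prompt; -Force suppresses the confirmation prompts):

```powershell
# Standard case: domain or private network profiles
Enable-PSRemoting -Force

# Machine with an adapter on a Public network profile
Enable-PSRemoting -SkipNetworkProfileCheck -Force
```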
Running Get-PSSessionConfiguration exposes the endpoints created by Enable-PSRemoting.
Figure 2

Enabling PowerShell Remoting Using Group Policy

If you have a large number of computers, configuring a group policy object (GPO) may be a better option to enable remoting than manually executing the Enable-PSRemoting cmdlet on each system.
The order is not important, but all three of the following steps must be completed for the GPO to take effect and enable remoting on your domain computers:
  • Create a Windows firewall exception for the WinRM service on TCP port 5985
  • Allow the WinRM service to automatically listen for HTTP requests
  • Set the WinRM Service to start automatically

Create a Windows firewall exception for the WinRM service on TCP port 5985

  1. To create the firewall exception, open the Group Policy Management Console and navigate to Computer Configuration\Administrative Templates\Network\Network Connections\Windows Firewall\Domain Profile.
Figure 3
  2. Right-click the Windows Firewall: Define inbound program exceptions setting and select Edit.
Figure 4
  3. Click Show, and in the Show Contents dialog box, under Value, enter the following line: 5985:TCP:*:Enabled:WinRM, as seen below:
Figure 5

Allow the WinRM service to automatically listen for HTTP requests

  1. Again using Group Policy Management, this setting can be located under Computer Configuration\Administrative Templates\Windows Components\Windows Remote Management (WinRM)\WinRM Service.
Figure 6
  2. Right-click Allow remote server management through WinRM and select Edit. Click Enabled and specify the IPv4 and IPv6 filters, which define which IP addresses listeners will be configured on. You can enter the * wildcard to indicate all IP addresses.
Figure 7
Set the WinRM Service to start automatically
  1. This setting can be found under Computer Configuration\Windows Settings\Security Settings\System Services\Windows Remote Management (WS-Management).
Figure 8
  2. Right-click Windows Remote Management (WS-Management), select Properties, and set the startup mode to "Automatic."
Figure 9
Once all the preceding GPO settings are completed and the group policy is applied, your domain computers within the policy scope will be ready to accept incoming PowerShell remoting connections.
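Once the policy has applied (running gpupdate /force on a test machine speeds this up), you can verify from any computer that a target is accepting WS-Man traffic. A quick check, assuming a domain member named SRV1 (a placeholder name):

```powershell
# Returns WS-Man protocol and version details if the listener is reachable;
# throws an error if the service or firewall is not configured
Test-WSMan -ComputerName SRV1
```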

Using Remoting

There are two common options for approaching remoting with PowerShell. The first is known as one-to-one remoting, in which you make a single remote connection and a prompt is displayed on the screen where you can enter the commands that are executed on the remote computer. On the surface, this connection looks like an SSH or telnet session, even though it is a very different technology under the hood. The second option is called one-to-many remoting, and it is especially suited for situations where you want to run the same commands or scripts in parallel on several remote computers.

One-to-One Remoting (1:1)

The Enter-PSSession cmdlet is used to start a one-to-one remoting session. After you execute the command, the Windows PowerShell prompt changes to indicate the name of the computer that you are connected to. See figure below.
Figure 10
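A minimal interactive session looks like the following (SRV1 is a placeholder name; substitute one of your own computers):

```powershell
Enter-PSSession -ComputerName SRV1   # prompt changes to [SRV1]: PS C:\...>
Get-Service -Name WinRM              # runs on SRV1, not locally
Exit-PSSession                       # back to the local prompt
```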
During this one-to-one session, the commands you enter at the session prompt are transported to the remote computer for execution. The commands' output is serialized into XML format and transmitted back to your computer, which then deserializes the XML data into objects and carries them into the Windows PowerShell pipeline. At the session prompt, you are not limited to entering commands; you can run scripts, import PowerShell modules, or add PSSnapins that are registered on the remote computer.
There are some caveats to this remoting feature that you need to be aware of. By default, WinRM only allows remote connections to the actual computer name; connections by IP address or DNS alias will fail. PowerShell does not load profile scripts on the remote computer, and to run other PowerShell scripts there, the execution policy on the remote computer must be set to allow it. If you use the Enter-PSSession cmdlet in a script, the script runs on the local machine to make the connection, but none of the script's commands are executed remotely because they were not entered interactively at the session prompt.

One-to-Many Remoting

With one-to-many remoting, you can send a single command or script to multiple computers at the same time. The commands are transported and executed on the remote computers, and each computer serializes the results into XML format before sending them back to your computer. Your computer deserializes the XML output into objects and moves them to the pipeline in the current PowerShell session.
The Invoke-Command cmdlet is used to execute one-to-many remoting connections. The -ComputerName parameter of the Invoke-Command accepts an array of names (strings); it can also receive the names from a file or get them from another source. For instance:
A comma-separated list of computers:
-ComputerName FS1,CoreG2,Server1
Read names from a text file named Servers.txt:
-ComputerName (Get-Content C:\Servers.txt)
Read a CSV file named Comp.csv that has a Computer column with computer names:
-ComputerName (Import-CSV C:\Comp.csv | Select -Expand Computer)
Query Active Directory for computer objects:
-ComputerName (Get-ADComputer -Filter * | Select -Expand Name)
Here is an example of using remoting to obtain the MAC addresses of a group of computers:
<code>
Invoke-Command -ComputerName FS1,CoreG2,Server1 -ScriptBlock `
{Get-NetAdapter | Select-Object -Property SystemName,Name,MacAddress |
Format-Table}
</code>
Here is the output:
Figure 11
Here is another example: Let’s say that you need to create a folder on each computer to store drivers and, at the same time, you want to assign full control permission to a domain user, named User1, to access the folder. Here is one way you could code the solution:
<code>
Invoke-Command -ComputerName FS1,CoreG2,Server1,Win81A `
-ScriptBlock {New-Item -ItemType Directory -Path C:\Drivers
$acl = Get-Acl C:\Drivers
$User1P = "lanztek\User1","FullControl","Allow"
$User1A = New-Object System.Security.AccessControl.FileSystemAccessRule $User1P
$acl.SetAccessRule($User1A)
$acl | Set-Acl C:\Drivers}
</code>
The preceding script may be run from any accessible computer in the network. It creates a folder named “Drivers” on the root of the C drive on each one of the computers that it touches.
The $acl variable stores the security descriptor of the Drivers folder; $User1P defines the permission level for User1 (full control). The $User1A variable holds a new object that defines an access rule for a file or directory, and it is used to modify the security descriptor ($acl). The last line of the script pipes the modified security descriptor ($acl) to the Set-Acl cmdlet, which applies it to the Drivers folder.
Once the script executes, you get immediate confirmation that the folder has been created on each one of the remote computers.
Figure 12
One-to-many remoting can be used again to verify that User1 has full control permission to the Drivers folder:
<code>
Invoke-Command -ComputerName FS1,CoreG2,Server1,Win81A `
-ScriptBlock {Get-Acl C:\Drivers |
Select-Object PSComputerName,AccessToString}
</code>
Figure 13
By default, remoting connects up to 32 computers at the same time. If you include more than 32 computers, PowerShell starts working with the first 32 and queues the remaining ones. As computers from the first batch complete their tasks, the others are pulled from the queue for processing. It is possible to use the Invoke-Command cmdlet with the -ThrottleLimit parameter to increase or decrease that number.
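For example, to fan a command out to a long list of servers 50 at a time instead of the default 32 (Servers.txt is a hypothetical file of computer names, one per line):

```powershell
Invoke-Command -ComputerName (Get-Content C:\Servers.txt) `
  -ScriptBlock { Get-Date } `
  -ThrottleLimit 50
```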

Persistent PSSessions

When using Invoke-Command with the -ComputerName parameter, the remote computer creates a new instance of PowerShell to run your commands or scripts, sends the results back to you, and then closes the session. Each time Invoke-Command runs, even if it targets the same computers, a new session is created, and any work done by a previous session will not be available in memory to the new connection. The same happens when you use Enter-PSSession with the -ComputerName parameter and then exit the connection by closing the console or using the Exit-PSSession command.
It is good to know that PowerShell can establish persistent connections (PSSessions) by using the New-PSSession cmdlet. New-PSSession allows you to launch a connection to one or more remote computers and starts an instance of Windows PowerShell on every target computer. You can then run Enter-PSSession or Invoke-Command with the -Session parameter to use the existing PSSession instead of starting a new session. Now you can execute commands on the remote computer and exit the session without killing the connection. Superb!
In the following example, the New-PSSession cmdlet is used to create several PSSessions; the PSSessions are stored in a variable named $Servers. Get-Content reads the computer names from a text file named Servers.txt and passes that information to New-PSSession via the -ComputerName parameter.
<code>
$Servers = New-PSSession -ComputerName (Get-Content c:\Servers.txt)
</code>
After running the command, typing $Servers or Get-PSSession will allow you to confirm that the sessions have been created.
Figure 14
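Those stored sessions can then be reused across commands without paying the connection cost each time. A short sketch, reusing the $Servers variable from above:

```powershell
# Run a command in the existing sessions; session state (variables, modules) persists
Invoke-Command -Session $Servers -ScriptBlock { $env:COMPUTERNAME }

# Tear the sessions down when you are finished
Remove-PSSession -Session $Servers
```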

Closing Remarks

Remoting is a firewall-friendly feature that relies on the WS-Management (WS-Man) open standard protocol to function. Microsoft implements WS-Man via the WinRM service. This article showed how to administer anywhere from a handful to a large number of computers with Windows PowerShell remoting, using interactive one-to-one sessions, one-to-many remoting, and persistent PSSessions. But wait, there's more. We have not yet talked about multihop remoting, implicit remoting, managing non-domain computers, or PowerShell Web Access. Those and other topics will be explained and demonstrated in the next article in this series.
If you would like to be notified when Wilfredo Lanz releases the next part of this article series please sign up to the WindowsNetworking.com Real time article update newsletter.
If you would like to read the previous part in this article series please go to Using PowerShell to Manage AD and AD Users.

Wednesday, October 29, 2014

Microsoft reveals ramped-up security offerings for Windows 10 and Office 365 across multiple devices

Microsoft kicked off its TechEd Europe conference in Barcelona today by joining up the dots with its security plans for Windows 10, as well as offerings for Microsoft services on mobile devices in the nearer future.
Joe Belfiore, corporate vice president of PC, tablet and phone, explained how Windows 10 will "significantly improve system protection against modern security threats".
Windows 10 "will enable you to secure the device and the code that's running on any device you deploy" as well as "give you some terrific tools to satisfy end users" and to "protect user identities against all the types of identity theft we're hearing about," Belfiore told delegates.
"In Windows 10, you'll be in control of any of the code that's authorised to run on any device. This way of securing the device means that - by policy - you decide that only signed code runs - that you've signed, or the OEM, or even only Microsoft-signed code," he explained.

Belfiore showed the user experience as including only authorised apps on a custom-built menu. "Default actions" he pointed out, are still simple to carry out, but non-default actions are embedded into the experience without confusion or obstruction.
For example, when a user was about to paste sensitive information from a corporate document into Twitter, it was immediately disallowed.
But the policy could be further customised to instead flash up a message - and an invitation to provide a reason - warning that information from a secure document was being posted to Twitter and that the IT department would be informed.
"Because the platform is the same across devices," explained Belfiore, "this works across all of them."
Belfiore also showed two-factor authentication using a phone as the second factor for login, which he reminded delegates was "inexpensive for IT managers" as well as no longer reliant on a password "stored on some server".
The security conversation didn't stop there, as enhanced features for managing mobile devices were then unveiled - going beyond devices running the Windows operating system to devices, such as the iPad, running only Microsoft software such as Office 365.
Julia White, general manager of Office 365, demonstrated touch-controlled MDM [mobile device management] of Office 365 on an iPad. App-wrapping will also be featured, as well as secure mobile apps.
"Users want access to all their information, everywhere," said White, explaining that, especially since the launch of Office on iPad, full control via this medium has been one of the most requested functions.
Android will also be included at rollout.
The Office 365 management functions, and the SDK to accompany it, are expected in the first quarter of 2015, while the Windows 10 features will obviously roll out with the OS - though whether they'll all arrive on launch day remains open to speculation.

Win10: Enough to Convince You it’s Time for a Client Upgrade?

Microsoft had high hopes for Windows 8, but those expectations haven’t quite panned out as planned. As of September, according to NetMarketShare.com statistics, Windows 8/8.1 only had a combined total of 12.26 percent, considerably less than twelve-year-old Windows XP (which garnered 23.87 percent) and far less than its immediate predecessor, Windows 7, which still has more than half of the desktop market share (52.71 percent). This is despite the fact that Windows 8 has been available for almost two years at the time of this writing.
In addition, a large proportion of those machines that are running Windows 8 and 8.1 are consumers’ computers that were bought this past year with the new version of Windows already installed. The majority of businesses have resisted upgrading, and there are several different reasons given for this. One is simple and has nothing to do with the merits of the operating system itself: many companies have adopted an every-other-version OS upgrade policy, and many others tend to avoid upgrading client operating systems, especially, until the one they’re currently running is out of support. Thus we saw many businesses that didn’t move from Windows XP to Windows 7 until Microsoft dropped support for the former in April of this year.
It makes sense from a bottom-line point of view. The cost of upgrading several hundred or several thousand machines to a new OS is significant and includes not just the licenses but in some cases hardware upgrades that are required to run the new operating system, as well as a great deal of administrative overhead, lost productivity as users navigate the inevitable learning curve, and the extra burden on help desk/tech support personnel dealing with user queries and troubleshooting the problems they get themselves into until they become more familiar with the new way of doing things.
Therefore, unless there is a compelling reason to upgrade – such as a real killer feature that will greatly enhance the user experience or important new security mechanisms – companies frequently opt to “sit this one out” when a new OS comes out. Even if they’re considering rolling it out, many will wait a year or more to allow for someone else to find the bugs and for the vendor to fix them.
Of course, this isn’t the only reason businesses don’t upgrade each time Microsoft issues a new release of Windows. There is also at least a perception that for a long time, every other version of the OS has been a failure, with Microsoft coming out with something drastically new and not very well implemented, then listening to consumer feedback and refining it in the next version. Windows XP was well-liked by most users after they got acquainted with it (although I can well remember the hue and cry when it first came out, mostly about its “bubble gum looking” interface). Vista was disparaged as a big flop, thanks to its resource-hogging behavior that made it run like a slow pig on less powerful machines and its in-your-face implementation of User Account Control.

Windows 7 addressed both of those complaints, and more, and was pretty well accepted by both individual users and the enterprise world. Then along came Windows 8 and upset the apple cart again. By taking away the Start button and Start menu that had been the primary basis of navigation since Windows 95, Microsoft invoked the ire and ridicule of a large percentage of its user base.
Yes, the new tiled interface worked great with tablets and touch screens, but unfortunately most business users and many home users were still working with traditional desktop machines, and the mouse/keyboard experience on Windows 8 left a lot to be desired in the eyes of most of those users. Yes, there are third party utilities – both paid and free – that can be installed to restore the Start button and menu, but many consumers weren’t aware of them and many of the more tech-savvy were annoyed at having to install an add-on to gain back the functionality that was once included in Windows out of the box.
Windows 8.1 was released close to one year after Windows 8, and was billed as a major update (i.e., more than a service pack but less than a version upgrade). It added back the Start button, but in an unsatisfying form, as the button only takes you to the hated (by desktop users) Start screen rather than producing the Start menu for which everyone was clamoring.  Since it’s a free upgrade, most of those who were running Windows 8 installed it, but very few of those who were running Windows 7 saw enough of an improvement to make them decide to make the move.
On September 30th, Microsoft held an event in San Francisco, aimed primarily at enterprise customers, to introduce the next real version upgrade, which they’re calling Windows 10. Some have speculated that the reason for skipping number 9 was to put more distance between the not-very-popular Windows 8 and the next iteration, formerly known by its code name Threshold. They also made a technical preview available for public download.
Immediately, most of the tech press rejoiced. The Start menu is back, albeit in a new “Modernized” format that combines the old favorite apps and search box with a panel of Modern UI tiles that can be customized. This makes life much easier for the many desktop users who felt lost without the menu (although most of us power users had long since installed Start 8 or Classic Shell and gone about our business).
The Start menu isn’t the only enhancement in Windows 10, but it’s the one getting most of the attention. Reviews from those testing the new OS have mostly been at least cautiously optimistic. I’ve been working with it since the day after it was released and so far, I like what I see. I’ll be doing a fuller review article for WindowsNetworking.com in the near future. Meanwhile, the big question is whether there’s enough there to persuade companies that it’s time to let go of Windows 7 and take the upgrade plunge this time, when Win 10 becomes generally available sometime around the middle of next year.  Write and tell us what you think. 

Saturday, October 25, 2014

How to create an amazing multi-monitor setup

The ability to expand your available desktop space beyond one screen doesn't stop at standard monitors. There's a host of tricks and hacks that you can use to expand your Windows desktop across more displays than you ever thought possible, from phones and tablets to other systems connected on a network, and even displays you've created yourself.
You can carry on as far as you want, until an immense, unblinking compound eye of monitors is staring back at you.
Your first port of call is a networked display, partly because our hoarding nature means we tend to have spare laptops and desktop systems skulking under desks like mutated cockroaches.
MaxiVista screenshot

The idea of a network display isn't entirely new - projector installations have made use of them for years - but the idea of running one as a standard extended display is a novel one.
Commercial products for this already exist - take MaxiVista for example. It's only $40, and it supports both Mac and Windows platforms, and pretty much every connection under the sun, from network to USB 3.0.

Zoning out

For a little more grey-matter bending, we're going to try a free option called ZoneScreen from www.zoneos.com.
This uses a little trick that's employed by many screen cloning tools: using a virtual display driver to create a virtual Windows screen.
With a special build of the VNC client, it's then possible to duplicate this virtual extended Windows screen over a network onto another system.
If your mind is running ahead then you can probably guess the main downside of this setup: it makes you feel a little drunk, as only a limited number of frames per second can be encoded and pumped over the network. While it's fast enough for browsing the web and other office jobs, it's far from ideal for video.
What it does provide is flexible and adaptable extra monitor capability that just happens to be cross-platform. You can sit at your desktop with a laptop on a stand and enjoy an extended desktop experience.
If you need to move elsewhere, just pick up the laptop, switch off VNC and it's ready to go.
It can help you make the most of the equipment you have available, and a mid-range laptop won't draw much more power than a decent sized LCD display.
The network aspect provides an elegant extension of your desktop space, and while it's not practical to use a laptop's display on its own, by reusing the whole laptop you can engineer a standalone display that's a project in itself.

Portable pads

The technology behind networked displays isn't limited to laptops - you can also extend your desktop to smaller portable devices like phones and tablets.
There's a range of apps available for iOS and Android devices. Simply install the software, then sit down at your desk and enjoy an extended desktop on your docked tablet, either wirelessly or via the USB cable.
Instead of a standard VNC client, these tools use a bundled application that requires its own server. This provides a more finely tuned experience, with additional touch features.
For iOS, Air Display is often lauded as the one to choose, but at $10 (approximately £7) it's one of the more expensive options.
ScreenSlider screenshot

For Android, our app of choice is ScreenSlider. It's available in regular and Pro versions, the latter of which adds features like touch controls. This app is pretty quick on screen updates.
We also recommend iDisplay, which offers good PC and Mac support alongside Android and iOS options, so you can cover all of your mobile devices with a single server-side service.
One additional trick these systems can offer is mirroring of your system's main screen, so you can let someone see what you're doing on-screen or wander the house while controlling your main desktop.

Old school

Old displays are another potential source of screen space. If you live like Steptoe, it's possible that you could have the odd green or black and white display still lying around.
While there's no guarantee it won't simply go pop when you power it up, if you can solder, it's possible to connect these up.
Their inputs are generally based on a basic composite signal, with a combined analogue Hsync, Vsync and video signal. This can be recreated using a standard TV out, or built from a standard VGA output.
For reference, the pinouts of a standard VGA port are for the red, green and blue pins to be 1, 2 and 3. These have their own grounds on pins 6, 7 and 8, and a generic ground is on pin 5. The sync lines are on pins 13 and 14 with their unified ground on pin 10.
You may find you need to tap the signal and ground lines from the display yourself. This might be a little too advanced, as it would require you to dismantle the display.
We've seen lots of interesting ways to embed old black and white monitors into systems, and entire PCs built into larger displays. It all gets a bit DIY at this point, but if you're after a novel solution there are some inspirational projects that make use of obsolete displays.
Finally, there are phone and tablet application extensions. These are less direct solutions that provide novel and alternative ways to monitor and control your desktop PC.
Apps like uTorrent let you monitor and control your favourite torrent program at your desk and away.
Remote Desktop Connection Client lets you control your Windows desktop from a Mac, and 2X Client offers Remote Desktop viewing.

Windows WebDAV client : Cache timeout settings

Normally this may not cause any issues, but files are a special case because SharePoint exposes its files with the WebDAV protocol.

Since I rarely use Mac systems, I'm not sure if or how their WebDAV integration works, but Windows provides a service called the Web Client (available on any client OS, and on server OSes when the 'desktop experience' feature is enabled). For performance's sake, the Web Client performs caching... not much... by default the cache timeout is 60 seconds (on Windows 7)... but when code is going to modify a file's contents IMMEDIATELY, even that 60-second window is sometimes noticed.

Thankfully, the timeout can be changed rather easily with a quick registry change.
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MRxDAV\Parameters\FileNotFoundCacheLifeTimeInSec

Change the timeout cache value for WebDAV

The value of the WebDAV timeout cache in Windows® 7 is 60 seconds. In order to change the timeout cache value, you will need to modify the HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MRxDAV\Parameters\FileNotFoundCacheLifeTimeInSec registry key. This action is necessary because there is no mechanism in place that will flush the cache on demand.

Modifying the registry key

  1. At the command prompt, run the Regedit command. This opens the Registry Editor.
  2. Locate and then double-click the following registry key: HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\services\MRxDAV\Parameters\FileNotFoundCacheLifeTimeInSec.
  3. In the Edit DWORD (32-bit) Value dialog box, change the value in the Value data: text box to your desired value and click OK. The value of the WebDAV timeout cache has now been changed.
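The same change can be scripted. As a hedged sketch in PowerShell (run elevated; the 1-second value and the Web Client service restart are assumptions based on the defaults described above, not a prescribed setting):

```powershell
# Shorten the WebDAV file-not-found cache from the default 60 seconds to 1 second
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\MRxDAV\Parameters' `
  -Name FileNotFoundCacheLifeTimeInSec -Type DWord -Value 1

# Restart the Web Client service so the new value takes effect
Restart-Service -Name WebClient
```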

IIS7 File Upload Size Limits

By default, IIS7 limits file uploads to 30MB. Oddly, it returns a 404 error if someone uploads something larger than that. The docs from Microsoft are a little confusing on this, so I thought I would try to clarify.
According to the following article, you can "remove the maxAllowedContentLength property" from the applicationhost.config file in IIS7 to lift the 30MB limit. However, in my case, this property was never in the file to begin with.
So, my assumption is that the 30MB limit is somewhere internal to IIS7. The article also doesn't say where to ADD the requestLimits node if it isn't already there.
Luckily, there is an alternate solution that can be enabled at the site level rather than server-wide.
<system.webServer>
        <security>
            <requestFiltering>
                <requestLimits maxAllowedContentLength="524288000"/>
            </requestFiltering>
        </security>
</system.webServer>
If you add the above code to the web.config file for your site, you can control the maximum upload size for that site. In many cases, the system.webServer node will already be in the file, so just add the security node within it.
Note that maxAllowedContentLength is specified in BYTES, not kilobytes.
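Because the unit is bytes, it is worth double-checking the arithmetic before copying a value; the 524288000 in the snippet above works out to exactly 500 MB:

```python
# maxAllowedContentLength is specified in bytes.
limit_bytes = 524288000    # the value used in the web.config snippet above
mb = 1024 * 1024           # bytes per megabyte

print(limit_bytes / mb)    # 500.0 -> the snippet allows uploads up to 500 MB
print(500 * mb)            # 524288000 -> how to derive the value for any size
```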
You may also need to restart your Web site (not the whole server) to enable the setting.
In case you are curious why I would want people to upload files larger than 30 MB: we were working on a video conversion script that lets people upload large MOV files and converts them to FLV.

Uploading Large Files to IIS / ASP.NET

Max Upload File Size in IIS and ASP.NET

While the IT Hit WebDAV Server Engine can process files of practically any size (up to 8,589,934,592 GB), the hosting environment or your WebDAV client may not support large file uploads.
If you host your WebDAV server in IIS/ASP.NET, you must specify the maximum file upload size in the web.config of your web application. By default, ASP.NET sets the maximum upload size to 4096 KB (4 MB). To increase the limit, add the appropriate section to your web.config file and specify the new value:
In case of IIS 7.x and later, both Integrated and Classic mode:
<system.webServer>
  <security>
    <requestFiltering>
      <requestLimits maxAllowedContentLength="2147483648" />
    </requestFiltering>
  </security>
</system.webServer>
In case of IIS 6.0:
<system.web>
  <httpRuntime maxRequestLength="2097151" />
</system.web>
Important! The maximum file upload segment size for both ASP.NET 2.0 and ASP.NET 4.0 is 2097151 KB, just under 2 GB. To upload files over 2 GB you need a client application with resumable upload support.
If you need to upload files larger than 2 GB you must implement the resumable upload interfaces and upload files in segments. Note that you will need a WebDAV client application that supports resumable upload in this case, such as IT Hit Ajax File Browser or WebDAV Sample Browser. They automatically detect that your server is hosted in IIS, break the file into 2 GB segments, and upload it segment by segment.
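The two settings use different units, which is a common source of confusion: maxAllowedContentLength (IIS request filtering) is in bytes, while maxRequestLength (ASP.NET) is in kilobytes. A quick sanity check of the values used in the snippets above:

```python
GB = 1024 ** 3  # bytes per gigabyte

max_allowed_content_length = 2147483648   # bytes, from the IIS 7.x snippet
max_request_length = 2097151              # kilobytes, from the IIS 6.0 snippet

print(max_allowed_content_length / GB)    # 2.0 -> exactly 2 GB
print(max_request_length * 1024)          # 2147482624 bytes, 1 KB short of 2 GB
```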

Upload Content Buffering in ASP.NET 2.0

File upload is handled differently in ASP.NET 4.0-based applications, HttpListener-based applications, and ASP.NET 2.0-based applications. While ASP.NET 4.0 and HttpListener pass file content directly to the engine, ASP.NET 2.0 first saves the file content to a temporary folder, limiting upload capabilities and increasing server load. To avoid upload buffering in ASP.NET 2.0 on the server side, the IT Hit WebDAV Server Engine provides ITHitPutUploadProgressAndResumeModule, which also significantly improves upload speed. To use the module in your web application, add it to the modules section in web.config:
In case of IIS 7.x Integrated mode:
<system.webServer>
  <modules>
    <add name="ITHitPutUploadProgressAndResumeModule" type="ITHit.WebDAV.Server.ResumableUpload.PutUploadProgressAndResumeModule, ITHit.WebDAV.Server" preCondition="integratedMode" />
  </modules>
</system.webServer>
In case of IIS 7.x Classic mode and IIS 6.0:
<system.web>
  <httpModules>
    <add name="ITHitPutUploadProgressAndResumeModule" type="ITHit.WebDAV.Server.ResumableUpload.PutUploadProgressAndResumeModule, ITHit.WebDAV.Server" />
  </httpModules>
</system.web>
If you enable this module in an ASP.NET 4.0 application it will be ignored.
Important! Always enable ITHitPutUploadProgressAndResumeModule in the following cases:
     - If you are running your application in the Visual Studio Development Server (not recommended).
     - If you are implementing resumable upload interfaces and hosting your server in ASP.NET 2.0.
Important! With the ITHitPutUploadProgressAndResumeModule module you must always use the DavContextBase(HttpContext) constructor.

Upload Timeout

To prevent script execution from being canceled when uploading a large file to an application hosted in IIS / ASP.NET, you must increase the script timeout value:
HttpContext.Current.Server.ScriptTimeout = 2400; // timeout in seconds
Note that if you store your data in a database, a timeout may often be caused by the database connection instead.

Upload of large files (> 100 MB) via WebDAV on Windows 7 is failing when upload takes longer than 30 Minutes

Scenario:
Jonny again; this time I want to tell you a little bit about the Web Client service (WebDAV) on Windows 7 SP1. Consider the following scenario:
You are working from your home office, connected to your ISP for internet access. You are finished with your work and want to upload your data to an
IIS 7.5 WebDAV share over HTTP(S) so that your co-workers can access the data. You start the upload, you see the progress bar move very quickly to
the end and stay there. After 30 minutes you receive the following error:

Error: 0x80070079 The semaphore timeout period has expired
What happened? You check whether the internet connection is broken: no, it is not.
Then you check whether the connection from the PC to the router is broken: no, it is not.
In short, you check every possible cause, but everything works fine and you are going crazy. Don't give up.
The problem is:
1. The progress bar moves very fast but the upload takes a long time, because the Web Client first copies the file to the WebDAV temporary files store on the client
and uploads it from there to the IIS WebDAV share.
2. The bigger issue is that you click retry, and after 30 minutes the same error occurs.
The cause is that the default timeout for uploads through the Web Client is 30 minutes.
The solution is to raise the timeout by modifying the following registry value (in seconds, as the value name indicates):
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\MRxDAV\Parameters
DWORD: FsCtlRequestTimeoutInSec
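As a .reg sketch of the change. The timeout below (3600 seconds, i.e. one hour) is an example value, not a recommendation; pick whatever comfortably exceeds your longest upload:

```reg
Windows Registry Editor Version 5.00

; Web Client request timeout, in seconds.
; 0x00000e10 = 3600 seconds (1 hour) - example value.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\MRxDAV\Parameters]
"FsCtlRequestTimeoutInSec"=dword:00000e10
```

You will likely need to restart the WebClient service (or reboot) for the change to take effect.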
OK, but now your co-worker, who is working from his home office just like you, wants to download the file from the IIS server to his HDD on Windows 7 SP1
using the Web Client, and he gets errors like:
"Cannot Copy FileName: Cannot read from the source file or disk"
Copy Folder
An unexpected error is keeping you from copying the folder. If you continue to receive this error, you can use the error code to search for help with this problem.
Error 0x800700DF: The file size exceeds the limit allowed and cannot be saved.
<file name>
Try again   Cancel
Oh man, how to fix this?
Again, don't give up. The cause is a 50 MB download limit in the Windows 7 SP1 Web Client, and yes, there is a registry key where we can raise that limit too.
Just modify the following registry entry (in bytes):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters
DWORD: FileSizeLimitInBytes
Default is 50000000 bytes.
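As a .reg fragment. The value 0xffffffff (4294967295 bytes, just under 4 GB) is the largest a DWORD can hold; any smaller limit that fits your files works too:

```reg
Windows Registry Editor Version 5.00

; Maximum file size, in bytes, that the Web Client will transfer.
; 0xffffffff = 4294967295 bytes (~4 GB), the maximum possible DWORD value.
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\WebClient\Parameters]
"FileSizeLimitInBytes"=dword:ffffffff
```

As with the upload timeout, restart the WebClient service (or reboot) afterwards so the new limit is picked up.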
Now you are ready to upload and download bigger files using WebDAV (the Web Client service) over HTTP(S).
Hope this helps