Monday, November 3, 2014

Ghost Hunters Kinect With Spirits

Microsoft's Kinect motion controller may have been intended to keep gamers moving and free from handheld devices while in the throes of play, but some intrepid explorers have been using it for a different purpose altogether: hunting ghosts.
Thanks to its skeletal-tracking and infrared-sensing capabilities, Kinect can "see" as many as six players in a room. What makes spines start to tingle, however, is when it seems to see more people than are physically present.
Use of the device in this way was demonstrated on the Travel Channel's paranormal investigation show Ghost Adventures, and was featured in a scene in the horror film Paranormal Activity 4.
Now, YouTube is overrun with videos by Kinect users who believe their devices have spotted spirits, as a simple search on "Kinect ghosts" reveals. There's even a website dedicated to the topic, as a Polygon report recently pointed out.

Boon for Paranormal Research

"The paranormal field has always adapted technology from other industries for use in documenting phenomena," Brandon Alvis, founder and president of the American Paranormal Research Association, told TechNewsWorld.
"Kinect is yet another example of researchers using any technology available to continue research of a possible existence of life after death. I believe that Kinect is a fascinating tool that will aid in documenting ghostly phenomena," he said.
"It is very possible that this technology could take paranormal research to the next level in finding credible data for the existence of ghosts," Alvis added.

The Little Chill

"It makes sense that any group using technology to investigate ideas would adopt new tools to further their work," Christine Arrington, a senior analyst for games with IHS, told TechNewsWorld.
"Additionally, it sounds like it makes good TV," she added. "Just reading about a figure appearing next to you on the screen gives me a little chill."
A video recorded by Rick Callahan during a Legend Trips event shows paranormal investigators using a modified Kinect camera system to capture an unexplained presence in the Mark Twain House in Hartford, Connecticut, on April 12, 2014.
Whether such figures reflect beings actually there or are simply artifacts of the technology, of course, is the question.

Not So Fast

"Ghost hunters are certainly gadget lovers, because use of a gadget like an EMF (electromagnetic field) meter or camera makes it feel like they are being objective in their data collection," said Sharon Hill, a geologist, researcher and skeptical advocate.
That allows them to believe that they are not the ones seeing the apparition -- the device is, she told TechNewsWorld.
"There are many problems with this," Hill maintained.

Logic Leap

First and foremost, the devices used by paranormal investigators measure environmental variables -- not ghosts, Hill asserted.
"They infer that changes in the environment are indicative of a ghost," she said. "They're not. They fail to control the area for other factors such as other people, drafty windows, etc."
In addition, "they never seem to do good baseline measurements," Hill added. "They just go anomaly hunting, and any anomaly equals ghost. That's a total leap in logic, and it makes no sense. They never test it -- they do not carefully document results and ask experts."
Modern conceptions of ghosts' characteristics "do not mesh with what we know of physics," Hill pointed out. "I'm more willing to rely on centuries of well-supported data on the laws of nature than throw it all out and assume there is a paranormal entity trying to interact with me. How can you rule out all the possible normal and conclude paranormal? The best you can say is, 'I don't know.'"

Ghosts Are in the Mind of the Beholder?

Moreover, when it comes to anything on TV or the Web, "we must always consider hoaxes or enhancements," Hill warned. "TV shows are edited -- we have no clue as to what was really going on. It is absurd to think anything you see on ghost TV shows constitutes evidence of any value, and any user-submitted videos are worthless as evidence, even though they might be curious."
In short, "the paranormal investigators are too quick to reach a paranormal conclusion. After all this time -- centuries of looking to prove ghosts exist -- they STILL have not done it, technology or not," she said.
"We can measure subatomic particles and identify microbes or detect characteristics from astonishingly distant objects in the universe," Hill pointed out, "but we are using gaming controllers to look for ghosts, because we still can't find them? Perhaps it's all in the mind of the ghost hunter." 

Thursday, October 30, 2014

Exchange Server 2013 Backup and Restore 101 - Recovering individual items (Part 1)

Introduction

In this article series, we are going to walk through the process of restoring data using Exchange Server 2013's built-in capabilities. We will start with the simplest scenario, in which an administrator has to restore a single message, and from there we will move through the other restore capabilities of Exchange Server 2013: individual items, disabled mailboxes, deleted mailboxes, mailbox databases and, finally, full server recovery.
All articles of this series are based on Exchange Server 2013 Service Pack 1.

Recovering Deleted items – An Introduction

The first stop of this article series is how to retrieve single messages from a mailbox using the built-in tools. The Mailbox Database is the main component where we define how long any deleted item will stay in the database, and from there an easy restore process can be initiated from either side: end-user or administrator.
The number of days a deleted message will stay in any given mailbox is controlled by the Keep deleted items for (days) attribute, which can be found on the properties of a Mailbox Database; by default the value is 14 (fourteen) days.
The steps to get there are: open Exchange Admin Center (EAC), click servers, click databases, and double-click the desired database. Then, click limits (Figure 01).
Image
Figure 01
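The same retention window can also be inspected and changed from the Exchange Management Shell. A minimal sketch, using the standard Get-MailboxDatabase and Set-MailboxDatabase cmdlets and assuming a hypothetical database named DB01:

```powershell
# View the deleted item retention window for every database (default is 14 days)
Get-MailboxDatabase | Format-Table Name, DeletedItemRetention

# Extend the window to 30 days on a database named DB01 (hypothetical name)
Set-MailboxDatabase -Identity "DB01" -DeletedItemRetention 30.00:00:00
```

The -DeletedItemRetention parameter takes a timespan in dd.hh:mm:ss format, so 30.00:00:00 means exactly 30 days.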
In previous versions of Exchange this area was known as the Dumpster; since Exchange Server 2010, however, the feature has been called the Recoverable Items folder, and its structure can be seen in Figure 02.
Image
Figure 02
Another concept that must be clear to the administrator is the difference between delete, soft delete and hard delete.
Using either the Outlook client or Outlook Web App, a user can select a message and hit the Delete key on the keyboard, or right-click the message and choose delete (Figure 03); the message will be removed from its current location and end up in the Deleted Items folder. This operation is called delete.
Image
Figure 03
If you are using ActiveSync (iOS in this example), the Trash option will be displayed when moving items from regular folders to the Deleted Items folder, as shown in Figure 04.
Image
Figure 04
So far we have moved the items from wherever they were to Deleted Items (the delete operation). From there, we can right-click the Deleted Items folder and then click empty (Figure 05) to perform a soft delete operation. A soft delete means that the message is no longer visible on the client side; however, it can be restored easily because all the content is located in the Deletions folder of the Recoverable Items folder.
Image
Figure 05
We can soft delete a message directly from its original folder by holding Shift and pressing the Delete key. When we do that, a dialog box asking for confirmation is displayed (Figure 06); when we click OK, the message is removed without stopping by the Deleted Items folder.
Image
Figure 06
If we try to delete a message from the Trash folder on an iOS device, we will notice that the caption changes to Delete, as shown in Figure 07.
Image
Figure 07

End-user restoring process…

Sooner or later a user will lose a message and be unable to find it in Deleted Items. We can instruct the user to right-click the Deleted Items folder and then click recover deleted items…, as shown in Figure 08.
Important note:
If the message was deleted longer ago than the number of days defined on the Mailbox Database hosting that specific mailbox, we will not be able to restore the item using this procedure.
Image
Figure 08
In the new window (Figure 09), the end-user will see all deleted messages; selecting any message and right-clicking it makes the recover and purge options available. The same options are available at the bottom right corner of the page.
Image
Figure 09
When the end-user selects the recover option for a message, a dialog box (Figure 10) is displayed informing where the restored message will appear; just click OK.
Image
Figure 10
All items located in the Deletions folder of the Recoverable Items folder are moved to the Purges folder when their retention period is reached or when the end-user uses the Purge option, and that is considered a hard delete operation. In Figure 11, we can see the dialog box that shows up when we try to purge an item from the recover deleted items window.
Image
Figure 11

The problem… hard delete operations

In the previous section, we looked at the Recoverable Items folder and how we can use it to restore items for our end-users. However, we still run the risk of a hard delete operation, and if that occurs we will need a previous backup to bring the data back; fortunately, there are methods to overcome this challenge.
A common situation is a user who knows they are about to be fired: the user removes all information from the mailbox and then purges the recover deleted items window as well, which creates a problem when we need to restore the data afterwards.
To avoid the issues described above, we can take advantage of a feature called Single Item Recovery, which enforces the number of days defined on the database. When Single Item Recovery is enabled, all messages that are moved to the Purges folder will stay there for the time defined on the database.
To enable the feature for a user, we just need to run the cmdlet Set-Mailbox <Mailbox> -SingleItemRecoveryEnabled $True, as shown in Figure 12.
Image
Figure 12
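As a sketch, assuming a hypothetical mailbox named User1, enabling and verifying the feature with the standard Set-Mailbox and Get-Mailbox cmdlets looks like this:

```powershell
# Enable Single Item Recovery for one mailbox (the mailbox name is hypothetical)
Set-Mailbox -Identity "User1" -SingleItemRecoveryEnabled $true

# Verify the setting and the retention window applied to the mailbox
Get-Mailbox -Identity "User1" |
    Format-List SingleItemRecoveryEnabled, RetainDeletedItemsFor

# Enable it for every mailbox in the organization
Get-Mailbox -ResultSize Unlimited |
    Set-Mailbox -SingleItemRecoveryEnabled $true
```

The last pipeline is handy because Single Item Recovery is disabled by default and must be turned on per mailbox.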
Using the Single Item Recovery feature, we get a consistent restore window for end-users and, in some scenarios, can replace the current tape/backup solutions. For instance, if you have Single Item Recovery enabled for all users and 30 days defined at the database, then you should never need tapes to restore items from at least the last 30 days.

Conclusion

In this first article of our series, we saw the different types of deletion of an item from the restore perspective. In the last section, we also covered how to enable the Single Item Recovery feature on a per-user basis; we will take advantage of this feature in the next article of this series.

Remote Management with PowerShell (Part 1)

Introduction

The previous article in this series explored Active Directory Domain Services management with PowerShell. Now we will examine the remoting features in PowerShell 4.0 and explore the protocols, services, and configurations needed for remoting to function. There will be demonstrations to highlight how remoting works by getting information, creating objects, changing settings, and assigning user permissions to a group of computers remotely.

Windows PowerShell Remoting

Windows PowerShell remoting provides a method to transmit any command to a remote computer for local execution. The commands do not have to be available on the computer that originates the connection; it is enough if just the remote computers are able to execute the commands.
Windows PowerShell remoting relies on the Web Services for Management (WS-Man) protocol. WS-Management is a Distributed Management Task Force (DMTF) open standard that runs over the HTTP (or HTTPS) protocol. The Windows Remote Management (WinRM) service is the Microsoft implementation of WS-Management; WinRM is at the heart of Windows PowerShell remoting, but the service can also be used by other, non-PowerShell applications.
By default, WS-Man and PowerShell remoting use ports 5985 and 5986 for connections over HTTP and HTTPS, respectively. This is much friendlier to network firewalls than legacy communication protocols such as the Distributed Component Object Model (DCOM) and remote procedure call (RPC), which use numerous ports and dynamic port mappings.
Remoting is enabled by default on Windows Server 2012 and it is required by the server manager console to communicate with other Windows servers, and even to connect to the local computer where the console is running. On client operating systems, such as Windows 7 or Windows 8, remoting is not enabled by default.
Once enabled, remoting registers at least one listener. Each listener accepts incoming traffic through either HTTP or HTTPS; listeners can be bound to one or multiple IP addresses. Incoming traffic specifies the intended destination or endpoint. These endpoints are also known as session configurations.
When traffic is directed to an endpoint, WinRM starts the PowerShell engine, hands off the incoming traffic, and waits for PowerShell to complete its task. PowerShell will then pass the results to WinRM, and WinRM handles the transmission of that data back to the computer that originated the commands.
While this article concentrates on the remoting feature of Windows PowerShell, it is worth noting that there are other remote connectivity protocols that are also used by specific PowerShell cmdlets. For instance, some cmdlets use the RPC protocol, others depend on the remote registry service. These numerous communication protocols demand additional configuration on the firewall to allow those PowerShell commands to be executed across the network.
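As a quick sketch of these services in action, the Test-WSMan cmdlet verifies that WinRM is reachable on a remote machine, and the winrm command-line tool can dump the local listener configuration (the computer name below is hypothetical):

```powershell
# Confirm the WinRM service on FS1 responds to WS-Man requests (FS1 is a hypothetical name)
Test-WSMan -ComputerName FS1

# Show the listeners registered on the local machine, including transport and port
winrm enumerate winrm/config/listener
```

If Test-WSMan returns protocol and vendor information, remoting traffic can reach the target; if it throws an error, check that remoting is enabled and port 5985 is open.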

Enabling PowerShell Remoting on a Local Computer

You may need to enable remoting on Windows clients, older Windows Server operating systems, or Windows Server 2012 if it has been disabled. However, keep in mind that remoting must be enabled only on computers that you will connect to; no configuration is needed on the computer from which you are sending the commands.
To manually enable remoting, run the Enable-PSRemoting cmdlet as shown below:
Image
Figure 1
Running the Enable-PSRemoting cmdlet makes the following changes to the computer:
  • Sets the WinRM service to start automatically and restarts it.
  • Registers the default endpoints (session configurations) for use by Windows PowerShell.
  • Creates an HTTP listener on port 5985 for all local IP addresses.
  • Creates an exception in the Windows Firewall for incoming TCP traffic on port 5985.
If one or more network adapters in a computer are set to public (as an alternative to work or domain), you must use the –SkipNetworkProfileCheck parameter for the Enable-PSRemoting cmdlet to succeed.
Running Get-PSSessionConfiguration exposes the endpoints created by Enable-PSRemoting.
Image
Figure 2
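Taken together, the steps above can be sketched as the following commands, run from an elevated PowerShell prompt:

```powershell
# Enable remoting; -Force suppresses the confirmation prompts
Enable-PSRemoting -Force

# On a machine with a network adapter set to the public profile, use instead:
# Enable-PSRemoting -Force -SkipNetworkProfileCheck

# List the default endpoints (session configurations) that were registered
Get-PSSessionConfiguration | Format-Table Name, PSVersion, Permission
```

The Permission column shows which users and groups are allowed to connect to each endpoint, which is useful when troubleshooting access-denied errors.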

Enabling PowerShell Remoting Using Group Policy

If you have a large number of computers, configuring a group policy object (GPO) may be a better option to enable remoting than manually executing the Enable-PSRemoting cmdlet on each system.
The order is not important, but the following three steps must be completed for the GPO to take effect and enable remoting on your domain computers:
  • Create a Windows firewall exception for the WinRM service on TCP port 5985
  • Allow the WinRM service to automatically listen for HTTP requests
  • Set the WinRM Service to start automatically

Create a windows firewall exception for the WinRM service on TCP port 5985

  1. To create the firewall exception, use the Group Policy Management Console and navigate to Computer Configuration\Administrative Templates\Network\Network Connections\Windows Firewall\Domain Profile.
Image
Figure 3
  2. Right-click the Windows Firewall: Define inbound program exceptions setting and select Edit.
Image
Figure 4
  3. Click Show and, in the Show Contents dialog box, under Value, enter the following line: 5985:TCP:*:Enabled:WinRM, as seen below:
Image
Figure 5

Allow the WinRM service to automatically listen for HTTP requests

  1. Again using Group Policy Management, that setting can be located under Computer Configuration\Administrative Templates\Windows Components\Windows Remote Management (WinRM)\WinRM Service.
Image
Figure 6
  2. Right-click Allow remote server management through WinRM and select Edit. Click on Enabled and specify the IPv4 and IPv6 filters, which define which IP addresses listeners will be configured on. You can enter the * wildcard to indicate all IP addresses.
Image
Figure 7
Set the WinRM Service to start automatically
  1. This setting can be found under Computer Configuration\Windows Settings\Security Settings\System Services\Windows Remote Management (WS-Management).
Image
Figure 8
  2. Right-click Windows Remote Management (WS-Management), select Properties and set the startup mode to “Automatic.”
Image
Figure 9
Once all the preceding GPO settings are completed and the group policy is applied, your domain computers within the policy scope will be ready to accept incoming PowerShell remoting connections.

Using Remoting

There are two common options for approaching remoting with PowerShell. The first is known as one-to-one remoting, in which you make a single remote connection and a prompt is displayed on the screen where you can enter the commands that are executed on the remote computer. On the surface, this connection looks like an SSH or telnet session, even though it is a very different technology under the hood. The second option is called one-to-many remoting and it is especially suited for situations when you may want to run the same commands or scripts in parallel to several remote computers.

One-to-One Remoting (1:1)

The Enter-PSSession cmdlet is used to start a one-to-one remoting session. After you execute the command, the Windows PowerShell prompt changes to indicate the name of the computer that you are connected to. See figure below.
Image
Figure 10
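A minimal one-to-one session might look like the following sketch; the server name FS1 is hypothetical:

```powershell
# Open an interactive session to a remote server (FS1 is a hypothetical name)
Enter-PSSession -ComputerName FS1

# The prompt changes to [FS1]: PS C:\...> and commands now run on FS1
Get-Service -Name WinRM

# Close the session and return to the local prompt
Exit-PSSession
```

Everything typed between Enter-PSSession and Exit-PSSession executes on the remote computer, not locally.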
During this one-to-one session, the commands you enter at the session prompt are transported to the remote computer for execution. The commands’ output is serialized into XML format and transmitted back to your computer, which then deserializes the XML data into objects and carries them into the Windows PowerShell pipeline. At the session prompt, you are not limited to just entering commands; you can run scripts, import PowerShell modules, or add PSSnapins that are registered on the remote computer.
There are some caveats to this remoting feature that you need to be aware of. By default, WinRM only allows remote connections to the actual computer name; IP addresses or DNS aliases will fail. PowerShell does not load profile scripts on the remote computer, and to run other PowerShell scripts, the execution policy on the remote computer must be set to allow it. If you use the Enter-PSSession cmdlet in a script, the script would run on the local machine to make the connection, but none of the script commands would be executed remotely because they were not entered interactively at the session prompt.

One-to-Many Remoting

With one-to-many remoting, you can send a single command or script to multiple computers at the same time. The commands are transported and executed on the remote computers, and each computer serializes the results into XML format before sending them back to your computer. Your computer deserializes the XML output into objects and moves them to the pipeline in the current PowerShell session.
The Invoke-Command cmdlet is used to execute one-to-many remoting connections. The -ComputerName parameter of the Invoke-Command accepts an array of names (strings); it can also receive the names from a file or get them from another source. For instance:
A comma-separated list of computers:
-ComputerName FS1,CoreG2,Server1
Reads names from a text file named servers.txt:
-ComputerName (Get-Content C:\Servers.txt)
Reads a CSV file named Comp.csv that has a computer column with computer names.
-ComputerName (Import-CSV C:\Comp.csv | Select –Expand Computer)
Queries Active Directory for computer objects
-ComputerName (Get-ADComputer –filter * | Select –Expand Name)
Here is an example of using remoting to obtain the MAC addresses of a group of computers:
<code>
Invoke-Command -ComputerName FS1,CoreG2,Server1 -ScriptBlock `
    {Get-NetAdapter | Select-Object -Property SystemName,Name,MacAddress |
     Format-Table}
</code>
Here is the output:
Image
Figure 11
Here is another example: Let’s say that you need to create a folder on each computer to store drivers and, at the same time, you want to assign full control permission to a domain user, named User1, to access the folder. Here is one way you could code the solution:
<code>
Invoke-Command -ComputerName Fs1,CoreG2,Server1,Win81A `
-ScriptBlock {New-Item -ItemType Directory -Path c:\Drivers
$acl = Get-Acl c:\Drivers
$User1P = "lanztek\User1","FullControl","Allow"
$User1A = New-Object System.Security.AccessControl.FileSystemAccessRule $User1P
$acl.SetAccessRule($User1A)
$acl | Set-Acl c:\Drivers}
</code>
The preceding script may be run from any accessible computer in the network. It creates a folder named “Drivers” on the root of the C drive on each one of the computers that it touches.
The $acl variable stores the security descriptor of the Drivers folder; $User1P defines the permission level for User1 (full control). The $User1A variable holds a new object that defines an access rule for a file or directory, and it is used to modify the security descriptor ($acl). The last line of the script pipes the modified security descriptor ($acl) to the Set-Acl cmdlet, which applies it to the Drivers folder.
Once the script executes, you get immediate confirmation that the folder has been created on each one of the remote computers.
Image
Figure 12
One-to-many remoting can be used again to verify that User1 has full control permission to the Drivers folder:
<code>
Invoke-Command -ComputerName Fs1,CoreG2,Server1,Win81A `
-ScriptBlock {get-acl c:\drivers |
Select-Object PSComputername,AccessToString}
</code>
Image
Figure 13
By default, remoting connects up to 32 computers at the same time. If you include more than 32 computers, PowerShell starts working with the first 32 and queues the remaining ones. As computers from the first batch complete their tasks, the others are pulled from the queue for processing. It is possible to use the Invoke-Command cmdlet with the -ThrottleLimit parameter to increase or decrease that number.
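A sketch of raising that limit, assuming a hypothetical list of server names in C:\Servers.txt:

```powershell
# Run the command on up to 64 computers simultaneously instead of the default 32
Invoke-Command -ComputerName (Get-Content C:\Servers.txt) `
    -ThrottleLimit 64 `
    -ScriptBlock { Get-Service -Name WinRM }
```

Raising the throttle limit trades local memory and CPU for faster completion, so test before using large values on a busy administrative workstation.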

Persistent PSSessions

When using Invoke-Command with the –ComputerName parameter, the remote computer creates a new instance of PowerShell to run your commands or scripts, sends the results back to you, and then closes the session. Each time Invoke-Command runs, even against the same computers, a new session is created, and any work done by a previous session will not be available in memory to the new connection. The same is true when you use Enter-PSSession with the –ComputerName parameter and then exit the connection by closing the console or using the Exit-PSSession command.
It is good to know that PowerShell has the capability to establish persistent connections (PSSessions) by using the New-PSSession cmdlet. New-PSSession allows you to open a connection to one or more remote computers and starts an instance of Windows PowerShell on every target computer. You can then run Enter-PSSession or Invoke-Command with their –Session parameter to use the existing PSSession instead of starting a new one. Now you can execute commands on the remote computer and exit the session without killing the connection. Superb!
In the following example, the New-PSSession cmdlet is used to create four different PSSessions; the PSSessions are stored in a variable named $Servers. Get-Content reads the computer names from a text file named Servers.txt and passes that information to New-PSSession via the -ComputerName parameter.
<code>
$Servers = New-PSSession -ComputerName (Get-Content c:\Servers.txt)
</code>
After running the command, typing $Servers or Get-PSSession will allow you to confirm that the sessions have been created.
Image
Figure 14
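Because the sessions persist, state created by one Invoke-Command call is still in memory for the next one. A sketch reusing the $Servers variable created above:

```powershell
# Define a variable inside the persistent sessions
Invoke-Command -Session $Servers -ScriptBlock { $stamp = Get-Date }

# The variable is still available on a later call to the same sessions
Invoke-Command -Session $Servers -ScriptBlock { $stamp }

# Close the sessions when finished to free resources on the remote computers
Remove-PSSession -Session $Servers
```

This is exactly what a fresh Invoke-Command with –ComputerName cannot do: in that case $stamp would not exist on the second call.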

Closing Remarks

Remoting is a firewall-friendly feature that relies on the WS-Management (WS-Man) open standard protocol to function. Microsoft implements WS-Man through the WinRM service. This article showed how to administer anywhere from a handful to a large number of computers with Windows PowerShell remoting, using interactive one-to-one sessions, one-to-many remoting, and persistent PSSessions. But wait, there’s more. We have not talked yet about multihop remoting, implicit remoting, managing non-domain computers, or PowerShell web access. Those and other topics will be explained and demonstrated in our next article in this series.
If you would like to read the previous part in this article series please go to Using PowerShell to Manage AD and AD Users.

How to extend the life of your laptop battery

The battery in your brand-new laptop may get you through a day's work on the road. But in two years, probably not.

Siddhartha Raval asked how to take care of a laptop battery so that it lasts a long time.
Batteries don't last forever. Like everything except diamonds and viral tweets, they eventually wear out. But with proper care, a laptop battery can still carry a sufficient charge until you're ready to move on to a better laptop.
But it's a tradeoff. Taking the best care of your laptop battery just may be more of a hassle than it's worth.
[Have a tech question? Ask PCWorld Contributing Editor Lincoln Spector. Send your query to answer@pcworld.com.]
So let me start with a less effective, but more practical approach:
When you're at home, running the laptop on AC power, and you believe that it will stay plugged in for a week or more, shut down the PC and remove the battery.
Then, when you need the battery, plug it back in. If it's been more than two months since you last used the battery, check it and charge it before taking it on the road.
Of course, you should never remove or insert a laptop battery while the laptop is running. Always shut it down first.
That's the practical approach. Here's the extreme care method:
For the absolute best results, never charge the battery past 80 percent or let it drop below 20 percent. When you're working on AC power, keep an eye on the battery's charge. When it hits or passes 80 percent, shut down your computer, remove the battery, then reboot. When it's time to take the laptop on the road, shut it down again and reinsert the battery.
And when you're using the laptop on battery power, shut it down before the battery drops below 20 percent, and don't start it up again until you have AC power.
As I said, probably more of a hassle than it's worth.

Wednesday, October 29, 2014

Virtual Networks in Microsoft Azure (Part 1)

Introduction

In Part I of this article, you will learn how to create a new virtual network in Azure. You will also learn how to allow virtual machines and cloud services on different virtual networks to communicate across the Azure backbone by connecting Azure virtual networks to each other. Part II of this article will show you how to extend your on-premises network into the Microsoft Azure public cloud.

Microsoft Azure Virtual Network Overview

Just as you create virtual networks in Hyper-V to connect virtual machines (VMs) to each other (private virtual network) or to external networks (external virtual network), you can create virtual networks (VNets) in Microsoft Azure to connect VMs and services to each other (Cloud-Only VNet), and also connect an on-premises network to an Azure virtual network (Cross-Premises VNet).
A Cloud-Only VNet is an isolated network that you create to connect virtual machines or cloud services to allow them to communicate across the Azure backbone. You must be proactive and plan out your VNet configuration before virtual machine and cloud service deployments because they acquire their network settings at deployment time. If you create a new VNet and want to connect existing VMs and cloud services to it, you will have to redeploy them to accomplish your objective.
A Cross-Premises VNet allows you to create a secure connection from a VPN device on your on-premises network to an Azure VNet Gateway. After the connection is established, resources connected to your on-premises network can communicate directly and securely with Azure resources connected to the Azure VNet. This is the type of configuration you would implement if you were deploying a branch office with multiple devices that required access to resources deployed in Azure. It is also possible to set up point-to-site secure connections to an Azure VNet if you only need to connect a limited number of on-premises devices; in this case, you configure a connection on each device using a VPN client. If you have on-premises infrastructure that requires fast, reliable, low-latency, and more secure connections to Azure resources, you can use the Azure ExpressRoute service to build private connections that do not use the public Internet. With ExpressRoute, connections from on-premises networks are established at an ExpressRoute location, or from your wide area network (WAN) through a service provider.

Creating a Cloud-Only VNet

Creating a Cloud-Only VNet using the Azure Management Portal is a fairly easy process. After you create the VNet, you can deploy and connect virtual machines and cloud services that need to communicate with each other.
From your local system, log in to the Azure Management Portal, and follow this procedure to create a Cloud-Only VNet:
  1. Click New, found in the lower left-hand corner of the screen, as shown in Figure 1.
Image
Figure 1: Azure Management Portal Screen
  2. In the new pane, click Network Services, then select Virtual Network, and then the Custom Create option, as shown in Figure 2.
Image
Figure 2: Creating a VNet with Advanced Options
  3. On the Virtual Networks Details page, enter a virtual network name and select a location from the dropdown, as shown in Figure 3. The virtual network name can be anything you like, but you should develop and use a naming convention that identifies the purpose of the VNet. You should select the VNet location based on the region where you want to deploy your VMs and cloud services. Once selected, you cannot change the region associated with the VNet.
Image
Figure 3: Defining VNet Details
  4. On the DNS Servers and VPN Connectivity page, you do not need to make any changes. Azure will provide name resolution by default, as shown in Figure 4. Since you are creating a Cloud-Only VNet, you do not need to select the Site-to-Site or Point-to-Site Connectivity options.
Image
Figure 4: Defining DNS Server and VPN Connectivity
  5. On the Virtual Network Address Spaces page, you do not need to make any changes unless you require a specific subnet definition or internal IP address range, as shown in Figure 5. If you want to associate multiple subnets to the VNet, select the add subnet option. When you deploy new VMs to this VNet, Azure allocates IP addresses from the defined ranges to communicate within the VNet only.
Image
Figure 5: Defining VNet Address Spaces
  6. Once you click on the checkmark in the lower right-hand corner of the page, Azure creates the new VNet. The VNet then appears in the Management Portal, as shown in Figure 6.
Figure 6: New VNet in Azure Management Portal
  7. When you create new VMs to deploy to the new VNet through the Azure Management Portal, you must use the From Gallery option in order to be able to select the new VNet.

VNet to VNet Connection Background

If you deploy VMs and cloud services on different Azure VNets and later require them to communicate with each other, you have to configure a VNet to VNet connection to create a communication path. A VNet to VNet connection requires an Azure VPN gateway with dynamic routing. VNets are connected using IPsec tunnels, and a single VNet can connect to up to 10 VNet to VNet gateways. By default, a VNet only allows network traffic across a single VNet to VNet gateway connection. The VNets can be in the same or different Azure subscriptions, and in the same or different Azure regions.
VNet to VNet connections can be configured using either a hub and spoke model or a daisy chain model, as shown in Figures 7 and 8, respectively.
Figure 7: Hub and Spoke VNet Connection Model
In the hub and spoke model, a VM on VNet1 can communicate with VMs on VNet2, VNet3, VNet4, and VNet5. However, a VM on VNet2, VNet3, VNet4, or VNet5 can only communicate with a VM on VNet1 because of the default single-hop isolation for a VNet to VNet connection.
Figure 8: Daisy Chain VNet Connection Model
In the daisy chain model, a VM on VNet1 can communicate with VMs on VNet2, but not with VMs on VNet3, VNet4, or VNet5. A VM on VNet2 can communicate with VMs on VNet1 and VNet3, but not VNet4 or VNet5, because of the same single-hop isolation.
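The single-hop isolation behavior can be sketched in a few lines of code (Python here, purely as an illustration; the VNet names and topologies are the hypothetical ones from Figures 7 and 8): a VM can reach only those VNets that share a direct gateway connection with its own VNet, because traffic is not forwarded across a second hop by default.

```python
# Illustrative sketch of default single-hop isolation between VNets.
# A VM can reach only VNets directly connected to its own VNet by a
# VNet to VNet gateway; traffic is not routed across a second hop.

def reachable(topology, vnet):
    """Return the set of VNets a VM on `vnet` can reach directly."""
    return set(topology.get(vnet, ()))

# Hub and spoke model (Figure 7): VNet1 is the hub.
hub_and_spoke = {
    "VNet1": ["VNet2", "VNet3", "VNet4", "VNet5"],
    "VNet2": ["VNet1"], "VNet3": ["VNet1"],
    "VNet4": ["VNet1"], "VNet5": ["VNet1"],
}

# Daisy chain model (Figure 8): each VNet connects only to its neighbors.
daisy_chain = {
    "VNet1": ["VNet2"],
    "VNet2": ["VNet1", "VNet3"],
    "VNet3": ["VNet2", "VNet4"],
    "VNet4": ["VNet3", "VNet5"],
    "VNet5": ["VNet4"],
}

print(reachable(hub_and_spoke, "VNet2"))  # only the hub, VNet1
print(reachable(daisy_chain, "VNet2"))    # neighbors VNet1 and VNet3 only
```

Note that in the daisy chain a VM on VNet2 cannot see VNet4 or VNet5 even though a gateway path physically exists through VNet3.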

Creating a VNet to VNet Connection

To create a VNet to VNet connection, you must first ensure that the IP address ranges defined for each VNet do not overlap. For example, if you are connecting virtual networks named SouthCentralVNet1 and SouthCentralVNet2, their address ranges should be unique, as shown in Table 1.
Virtual Network             Virtual Network IP Address Definition
SouthCentralVNet1 (VNet1)   10.1.0.0/16
SouthCentralVNet2 (VNet2)   10.2.0.0/16
Table 1: Example VNet IP Address Ranges
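You can verify that two candidate address spaces do not overlap before creating the VNets. As a quick sanity check (a Python sketch using the standard ipaddress module; the ranges are the ones from Table 1):

```python
import ipaddress

# Address spaces from Table 1; the ranges must not overlap for a
# VNet to VNet connection to work.
vnet1 = ipaddress.ip_network("10.1.0.0/16")
vnet2 = ipaddress.ip_network("10.2.0.0/16")

print(vnet1.overlaps(vnet2))  # False -> safe to connect

# An overlapping pair, by contrast, would fail this check:
print(vnet1.overlaps(ipaddress.ip_network("10.1.128.0/17")))  # True
```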
Once the virtual networks are created, there are five more steps to perform before the VNet to VNet connection configuration is complete:
  • Configure each VNet to identify the other VNet as a local network site in Azure
  • Create dynamic routing gateways for each VNet
  • Configure each local network with the IP address of the local gateway
  • Configure a shared key for the VNet to VNet connection
  • Connect the VPN gateways
From your local system, log in to the Azure Management Portal, and follow this procedure to create a VNet to VNet connection for two existing Azure virtual networks with unique address ranges:
  1. In the Azure Management Portal, click New, select Network Services, then Virtual Network, and then select the Add Local Network option, as shown in Figure 9.
Figure 9: Add Local Network Option
  2. On the Specify your local network details page, enter the name of the first VNet that you want to connect, as shown in Figure 10. For the VPN Device IP Address, enter a placeholder address; you will return and set this parameter after Azure creates the gateway and assigns its IP address later in the process.
Figure 10: Configuring Local Network Details
  3. On the Specify the address space page, enter the actual IP address range that was created for VNet1, as shown in Figure 11.
Figure 11: Configuring Local Network Address Space
  4. Repeat Steps 1 to 3 for VNet2, using the unique IP address range defined for that virtual network in Azure.
  5. On the Networks page, click on VNet1 and then select the Configure page, as shown in Figure 12.
Figure 12: VNet1 Configure page
  6. In the site-to-site connectivity section, select Connect to the local network, and then select VNet2 as the local network, as shown in Figure 13.
Figure 13: Selecting Site-to-Site Connectivity for VNet1
  7. In the virtual network address spaces section, click add gateway subnet, and then click the save icon, as shown in Figure 14.
Figure 14: Adding a Gateway Subnet for VNet1
  8. Repeat Steps 5 to 7 for VNet2 and specify VNet1 as a local network.
  9. On the Dashboard page for VNet1, select Create Gateway, as shown in Figure 15.
Figure 15: Creating a Gateway for VNet1
  10. Make sure to select Dynamic Routing, as shown in Figure 16.
Figure 16: Selecting Dynamic Routing for the VNet Gateway
  11. While Azure creates the gateway, which takes about 15 minutes, you will see the status shown in Figure 17.
Figure 17: VNet1 Dashboard Status during Gateway Creation
  12. Repeat Steps 9 and 10 to create the gateway for VNet2. You do not need to wait for the first gateway to be created, as Azure can create both gateways concurrently.
  13. When the gateway status changes to Connecting, retrieve the IP address for each gateway from the Dashboard, as shown in Figure 18.
Figure 18: Gateway IP address after Gateway Creation
  14. On the Local Networks page, click on VNet1, and then click Edit at the bottom of the page. For the VPN Device IP Address, enter the gateway IP address that you recorded for VNet1, as shown in Figure 19.
Figure 19: Local Network Gateway IP Address Configuration
  15. Repeat Step 14 for VNet2.
  16. The final step in setting up the VPN gateway connection is to configure the same pre-shared IPsec key on both sides. You can accomplish this with the following cmdlets in an Azure PowerShell session:
    Set-AzureVNetGatewayKey -VNetName SouthCentralVNet1 -LocalNetworkSiteName VNet2 -SharedKey AB12cd34
    Set-AzureVNetGatewayKey -VNetName SouthCentralVNet2 -LocalNetworkSiteName VNet1 -SharedKey AB12cd34
  17. After these cmdlets complete successfully, you can select the Connect option on the VNet Dashboard page, and the connection will initialize. Once the connection is initialized, the Dashboard displays the VNet to VNet connection, as shown in Figure 20.
Figure 20: Successful VNet to VNet Connection

Conclusion

By default, a Microsoft Azure Cloud-Only VNet restricts communication to the resources deployed on that VNet. A VNet to VNet connection allows you to provide a communication path between resources deployed on two different VNets across the Azure backbone. The configuration of a VNet to VNet connection is very similar to that of connecting an on-premises network to an Azure VNet, which you will learn about in Part II of this article.
If you would like to be notified when Janique Carbone releases the next part in this article series, please sign up for our VirtualizationAdmin.com Real-Time Article Update newsletter.