Manually Activating a K5 Windows Server License

K5 IaaS includes a Key Management Service (KMS) for activating the bundled operating system license included with all Windows Server VMs. The URL of the regional KMS service is listed in the appendix of the published IaaS Features Guide, along with instructions for requesting a KMS batch file from Fujitsu in order to activate your VMs. In this blog I show you a simple way to activate the VM without needing this batch file.

Pre-reqs:

KMS activation requires TCP 1688 access from the VM to the KMS service, as well as DNS in order to resolve the KMS URL. So make sure your Security Group and firewalls allow, at a minimum, this traffic to the relevant KMS URL below, and that the VM is on a network attached to a router configured with an external gateway.

Steps:

From within the VM to be activated:

  1. Open a command prompt with elevated permissions (Right click and run as administrator)
  2. Run the following command to point Windows to the K5 KMS server:
    • cscript c:\windows\system32\slmgr.vbs -skms <KMS URL>
    • KMS URL options (use the URL for your region):
      • kms.uk-1.cloud.global.fujitsu.com (UK)
      • kms.jp-east-1.cloud.global.fujitsu.com (Japan East)
      • kms.jp-west-2.cloud.global.fujitsu.com (Japan West)
      • kms.fi-1.cloud.global.fujitsu.com (Finland)
  3. Run the following command to activate Windows:
    • cscript c:\windows\system32\slmgr.vbs -ato
  4. If successful, a “Product Successfully Activated” message will be displayed in the command prompt.
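
For example, to point a VM in the UK region at the UK KMS and then activate it, the full sequence is:

cscript c:\windows\system32\slmgr.vbs -skms kms.uk-1.cloud.global.fujitsu.com
cscript c:\windows\system32\slmgr.vbs -ato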

Deploying a K5 IaaS Load Balanced Auto Scaling Stack

In this blog, I’m going to show you how to deploy a Load Balanced Auto Scaling Web Service on Fujitsu’s K5 IaaS platform, using a Heat stack.

The Heat stack template will create the virtual router, network, subnet, security groups, load balancer, monitoring and scale-out/in rules, and configure them all together, allowing the complete templated web service to be deployed in a matter of minutes.

Before you start, you need to deploy, configure and template your web server. This can be Apache, IIS or any other web service of your choosing, as long as the actual web service is stateless, as no new data should be saved within the template. Virtual servers can and will be deleted during scale-in and scale-out, so it is imperative that your template image is able to automatically connect to, read from and write to an external data source, using either API calls or an external database. All available operating system patches and updates should also be applied to the VM before it is templated, with the template regularly refreshed with the latest updates as and when they are released.

Make sure you also safely record any passwords that you have used, so they can be retrieved at a later date. If you want to test or demo CPU load, you may also want to install a free load-generation tool such as HeavyLoad.

For the purposes of this example, I am using a simple Windows 2012 VM, configured with the default IIS role and displaying the default iisstart.html screen. As the VM is standalone and not part of a domain, I did not bother to run sysprep, although you may want to consider this if you are deploying your system into a production or AD environment.

Cloning the VM Template

1. Once you are happy with your VM template, shut the VM down and initiate a clone of its system disk in order to create a private VM image.

2. Within the IaaS Portal, go to ‘Project | Compute | Storage’ and locate the storage assigned to your VM template (the name of the VM will be shown under ‘Connected Virtual Server’), then click on its hyperlinked ‘Storage Name’.

3. Note (copy) the Storage ID; it will be used in step 5.

4. Establish your API connection and retrieve your authentication token (this example assumes your token is stored in the $OS_AUTH_TOKEN variable used within the K5 guides); a rough sketch of the token request is shown below.
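
If you have not done this before, the token request is a standard Keystone v3 call. The sketch below assumes the UK region identity endpoint and that your contract number is used as the domain name, as in the K5 API guides; the $CONTRACT, $K5_USER, $K5_PASSWORD and $PROJECT_ID variables are placeholders of my own, so adjust them to suit your region and environment:

curl -si -X POST https://identity.uk-1.cloud.global.fujitsu.com/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth": {"identity": {"methods": ["password"],
        "password": {"user": {"domain": {"name": "'"$CONTRACT"'"},
                              "name": "'"$K5_USER"'",
                              "password": "'"$K5_PASSWORD"'"}}},
        "scope": {"project": {"id": "'"$PROJECT_ID"'"}}}}' | grep X-Subject-Token

The token is returned in the X-Subject-Token response header; copy it into $OS_AUTH_TOKEN before continuing.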

5. Declare the following variables for use within the clone API command:

VOLUME_ID= the Storage ID copied in step 3
NAME= e.g. iistemplate-v1.0
CONTAINER_FORMAT=bare
DISK_FORMAT=raw
FORCE=true

6. Use the following command to start the clone process:
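
The clone itself uses the standard Cinder “upload volume to image” action. The sketch below is mine rather than an exact copy of the K5 guides; the block storage endpoint, API version and the $PROJECT_ID placeholder are assumptions, so check the K5 API reference for your region before using it:

curl -X POST https://blockstorage.uk-1.cloud.global.fujitsu.com/v2/$PROJECT_ID/volumes/$VOLUME_ID/action \
  -H "X-Auth-Token: $OS_AUTH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"os-volume_upload_image": {"image_name": "'"$NAME"'",
       "container_format": "'"$CONTAINER_FORMAT"'",
       "disk_format": "'"$DISK_FORMAT"'",
       "force": '"$FORCE"'}}'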

Note this can take some time (over an hour) to complete, with the progress shown in the ‘Status’ column for the VM system disk within the ‘List Storage’ screen. Initially the status will be shown as ‘uploading’, changing back to ‘in-use’ when the clone process is complete.

7. Once complete, the VM image template will be available for deployment under ‘Project | Compute | Image’. Locate and click the hyperlinked Image Name for this image, and note (copy) the Image ID to be used later within the Heat stack file.

Using a Heat Stack to deploy your load balanced autoscaling web application

To walk you through the Heat stack, I’ll break it into sections. For the complete example stack, please see the bottom of this blog.

Within the parameters section, please provide your values for the AZ, the ID of the external network (or the ID of an existing router), the IP address of any client computer you want to use to remote onto the deployed web server VMs (or leave the default to allow none), the size of the VM to deploy, the ID of the VM image created in step 7 above and the name of the pre-existing keypair to use. You also have the option of changing the default load balancer name.
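
As an illustration, a trimmed-down parameters section along these lines would do the job (the parameter names and defaults here are my own sketch rather than an exact copy of the complete stack at the bottom of this blog):

parameters:
  az:
    type: string
    default: uk-1a                 # Availability Zone to deploy into
  ext_net_id:
    type: string                   # ID of the external network (or of an existing router)
  remote_client_ip:
    type: string
    default: 0.0.0.0/32            # client CIDR allowed to RDP/SSH onto the web servers (default allows none)
  flavor:
    type: string
    default: S-1                   # size (flavor) of the VMs to deploy
  image_id:
    type: string                   # ID of the VM image created in step 7 above
  key_name:
    type: string                   # name of the pre-existing keypair
  lb_name:
    type: string
    default: Web_Autoscale_LB      # load balancer name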

At the time of writing there is a known issue with creating externally connected routers using a Heat stack: the router configuration looks correct, but no external traffic can pass through the router. To overcome this, you need to either deploy the router and configure the external gateway manually and then refer to the router ID within your Heat stack, or simply reapply the external gateway settings on the router post creation. This example assumes you will do the latter, so the Heat stack creates the router and configures the external gateway for you.

In the next section we begin to declare the resources, starting with the external router, network, subnet and security groups. Again, the code can be changed if you want to supply the ID of an existing router; just look at the “****” commented sections for guidance.

For the purpose of this example, I simply hardcoded the names of these resources and the subnet information into this section; you can change this if you wish, or integrate it with the parameters section of the stack.
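
To give a flavour of this part of the resources section, here is a trimmed sketch using the standard Heat Neutron resource types; the CIDR and the K5 availability_zone property are illustrative, so refer to the complete stack at the bottom of this blog for the exact definitions:

resources:
  web_autoscale_router:
    type: OS::Neutron::Router
    properties:
      name: Web_Autoscale_router
      availability_zone: { get_param: az }
      external_gateway_info:
        network: { get_param: ext_net_id }   # see the known issue above - reapply the gateway post creation

  web_autoscale_net:
    type: OS::Neutron::Net
    properties:
      name: Web_Autoscale_net
      availability_zone: { get_param: az }

  web_autoscale_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: Web_Autoscale_subnet
      network_id: { get_resource: web_autoscale_net }
      cidr: 192.168.10.0/24                  # hardcoded for this example
      availability_zone: { get_param: az }

  web_autoscale_router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: web_autoscale_router }
      subnet_id: { get_resource: web_autoscale_subnet }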

The stack then creates a Web_Autoscale_Remote_Access Security Group with three rules: two allowing RDP and SSH respectively for remote management of your Windows Server/Linux web servers, and one allowing outgoing HTTP traffic to 169.254.169.254. This is a special IP address used by the K5 Cloud-Init process and it must be reachable during the VM build process. The outbound rule is technically not required at the moment, as this traffic is allowed out anyway by the default outbound rules, but you may want to remove those default rules post deployment to secure your environment further (I’m yet to find a way to delete these default rules within a Heat stack). If you do delete them manually, you will also need to add a further rule to allow the load balancer to send HTTP traffic to the Web_Autoscale_HTTP SG, as this is currently permitted by the default rule.

The Web_Autoscale_LoadBalancer SG simply allows inbound HTTP from anywhere, and the Web_Autoscale_HTTP SG allows inbound HTTP traffic only from members of the LoadBalancer SG, i.e. the load balancer itself.
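
A cut-down sketch of these three security groups is shown below; the rule syntax in the complete stack may differ slightly, but the intent is the same:

  web_autoscale_remote_access_sg:
    type: OS::Neutron::SecurityGroup
    properties:
      name: Web_Autoscale_Remote_Access
      description: RDP/SSH management access plus Cloud-Init metadata access
      rules:
        - direction: ingress                 # RDP for Windows Server management
          protocol: tcp
          port_range_min: 3389
          port_range_max: 3389
          remote_ip_prefix: { get_param: remote_client_ip }
        - direction: ingress                 # SSH for Linux management
          protocol: tcp
          port_range_min: 22
          port_range_max: 22
          remote_ip_prefix: { get_param: remote_client_ip }
        - direction: egress                  # HTTP to the K5 Cloud-Init metadata address
          protocol: tcp
          port_range_min: 80
          port_range_max: 80
          remote_ip_prefix: 169.254.169.254/32

  web_autoscale_loadbalancer_sg:
    type: OS::Neutron::SecurityGroup
    properties:
      name: Web_Autoscale_LoadBalancer
      rules:
        - direction: ingress                 # HTTP from anywhere
          protocol: tcp
          port_range_min: 80
          port_range_max: 80
          remote_ip_prefix: 0.0.0.0/0

  web_autoscale_http_sg:
    type: OS::Neutron::SecurityGroup
    properties:
      name: Web_Autoscale_HTTP
      rules:
        - direction: ingress                 # HTTP only from members of the LoadBalancer SG
          protocol: tcp
          port_range_min: 80
          port_range_max: 80
          remote_mode: remote_group_id
          remote_group_id: { get_resource: web_autoscale_loadbalancer_sg }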

The next section configures the auto scaling group and resources such as the load balancer, VM configuration and scaling policies. See the comments alongside the code for further information and for areas to customise, such as the minimum/maximum number of VMs to deploy.
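
A minimal sketch of the auto scaling group and scaling policies follows; the load balancer and pool resources are omitted here for brevity, so see the complete stack at the bottom of this blog for how the group members are registered with the load balancer:

  web_autoscale_group:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 2                            # customise the minimum/maximum number of VMs here
      max_size: 4
      resource:
        type: OS::Nova::Server
        properties:
          image: { get_param: image_id }
          flavor: { get_param: flavor }
          key_name: { get_param: key_name }
          networks:
            - network: { get_resource: web_autoscale_net }
          security_groups:
            - { get_resource: web_autoscale_http_sg }
            - { get_resource: web_autoscale_remote_access_sg }
          metadata: { "metering.stack": { get_param: "OS::stack_id" } }   # lets the alarms below match these VMs

  scale_out_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: web_autoscale_group }
      cooldown: 300
      scaling_adjustment: 1                  # add one VM on scale out

  scale_in_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: { get_resource: web_autoscale_group }
      cooldown: 300
      scaling_adjustment: -1                 # remove one VM on scale in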

The final section configures the alarms and triggers used to scale the number of VM instances out and in. The periods and thresholds I’ve used are purely an example and you should tailor these to suit your particular environment, VM size, operating system and application. You should also monitor these values and application performance over a period of time, to ensure auto scaling is working correctly for your implementation. Note that the CPU % rate is the average across all your deployed VMs, not a single VM.
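
As a sketch, a scale-out alarm using the standard OS::Ceilometer::Alarm resource looks like the following (the scale-in alarm is the mirror image, with a lower threshold and scale_in_policy as its action; the period and threshold shown are placeholders only):

  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale out if average CPU across the group exceeds 70% for 5 minutes
      meter_name: cpu_util
      statistic: avg
      period: 300
      evaluation_periods: 1
      threshold: 70
      comparison_operator: gt
      alarm_actions:
        - { get_attr: [scale_out_policy, alarm_url] }
      matching_metadata:
        metadata.user_metadata.stack: { get_param: "OS::stack_id" }       # matches the metering.stack tag set on the VMs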

Currently there is a limitation with this stack, in that it will not remove a VM instance that the load balancer has detected as unhealthy. The tricky bit here will be determining which of the VMs has the problem and then having a trigger that deletes only that VM, rather than simply scaling down the number of VMs. I might look at this challenge in a future blog, when I have more time.

So putting it all together we have the following. This includes all the parameters I used in my environment, so make sure you update them before you run the stack:

Your completed Heat stack should be saved locally as a *.YAML file. Before running it, ensure that no router, security group or network exists with the same names as those specified within your Heat stack, as this can cause unpredictable results.

1. To run the Heat stack, within your K5 IaaS portal, browse to ‘Project | Compute | Stack | + Create Stack’.

2. In the resulting screen, enter a Stack Name, select File and browse to your YAML file. Then specify a suitable time value (I’ve specified 40 minutes, as it can take around 30 minutes to deploy this Windows Heat stack) and select Delete to remove any deployed resources if the stack deployment does not fully complete.

[Screenshot: the Create Stack screen]

Once the status of the stack is shown as “CREATE_COMPLETE”, you will need to reapply the router settings as described above. To do this:

1. Select ‘Project | Network | Virtual Router’, then select ‘Action | Gateway Settings’ against the router created by the Heat stack, i.e. Web_Autoscale_router

2. Ensuring the ‘External Virtual Network’ drop down box is showing the name of an external network, e.g. inf_az1_ext-net02, click the ‘Set(ting)’ button.

After a few minutes, verify that the web site can be accessed using the load balancer DNS name shown under ‘Project | Network | Load Balancer’ for the Heat-created load balancer, i.e. Web_Autoscale_LB.

[Screenshot: the default IIS page served via the load balancer]

VM instances will be created with an auto generated name based on the following format:

“First 2 characters of the stack name” + “-” + “Last 11 characters of the resource name of the AutoScalingGroup” + “-” + “Random ID (12 characters)” + “-” + “Random ID (12 characters)” + “-” + “Random ID (12 characters)”

Example: au-aling_group-knu4eeueo2c5-cyrtttd6lwbu-xsge7xcbkxum

FUJITSU K5 Security Groups in a Nutshell

Introduction to Security Groups

A security group (SG) is a simple virtual firewall that is easy to create and manage, and that controls the traffic for one or more virtual resources, including virtual machines (VMs), virtual load balancers (LBs) and Network Connectors.

Each SG consists of a number of SG rules that allow traffic to or from the virtual resources assigned to that SG.

SGs actually perform packet filtering on the ports connected to the virtual resource’s network interface card (NIC), unlike the Firewall service, which sets packet filters on the virtual router. This means that security groups need to be managed and applied for each individual NIC within the virtual resource. When deploying a VM with multiple NICs via the portal, all NICs will be assigned the same Security Groups you specified on the “Access and Security” tab of the wizard.

SGs use a white-list policy where packets from white-listed addresses and ports are accepted, but everything else is denied. In other words, if the communication does not appear on the whitelist, then it is rejected.

All rules can be deleted from a SG, effectively not allowing ANY access.

The scope of each SG is limited to a project, but spans Availability Zones (AZs), meaning that virtual resources from either K5 AZ can be added to the same SG.

On K5 you can have up to 20 SGs per project, with a total of 100 rules across all SGs, although these limits can be increased by raising a service request with Fujitsu.

Like other K5 objects, each SG is uniquely identified by an ID, rather than by its name. For this reason, it is possible to create multiple SGs with the same name, each having different rules. Care therefore needs to be taken in multi-administrator environments to ensure the intended SG is assigned, as someone else may have created a group with the same name but different rules.

The “Default” Security Group

Each new project is created with a default Security Group called “default”, with a description of “default”. This group cannot be deleted, and neither its name nor its description can be amended.

The default SG contains the following default rules:

  • Allow all inbound traffic from other members of the default security group (the security group specifies itself as a source security group in its inbound rules)
  • Allow all unrestricted outbound traffic from each member.

These rules can be added to or deleted as appropriate.

Although the default rules show IPv6, only IPv4 is currently supported by K5.

When creating a new VM, the default SG is ticked by default, but it can be unticked as long as another SG is selected, as membership of at least one SG is mandatory.

The K5 VM creation process for both Windows Server and Linux VMs requires that the VM has outbound access to 169.254.169.254 on TCP port 80 (HTTP). If the default Security Group, or the default rules that allow unrestricted outbound traffic, are deleted or not applied to a VM, then an alternative rule must be created to allow this traffic during VM creation. Otherwise, it will not be possible to authenticate with and log into the VM after creation. The rule can be removed once you have successfully logged into the VM.

Custom Security Groups

When you create a security group, you must provide it with a name and a description; both can be amended post creation. Each new SG is created with a default rule to allow all outgoing traffic. Although the default rules show IPv6, only IPv4 is currently supported by K5.

A SG cannot be deleted while it is still assigned to virtual resources, so the assignment either needs to be amended first, or the virtual resources themselves deleted, prior to deleting the SG.

When you associate multiple SGs with a virtual resource, the rules from each SG are effectively aggregated into one set of rules. (I’ve not yet hit a limit on the number of SGs that can be added to a virtual resource.)

Security Group Rules

Security group rules are always permissive; i.e. you can’t create rules that deny access.

Security groups are stateful: if you send a request from your VM, the response to that traffic is allowed to flow back in regardless of the defined inbound security group rules.

You can add and remove rules at any time, with changes automatically applied to all members of the SG.

Each rule has either an IP address/CIDR or another Security Group (from within the project) as its source (receive from) or destination (send to) location.

Specifying a SG in a rule is the same as saying “all virtual resources within this group”. A rule can also reference the SG it belongs to, meaning the rule applies to all virtual resources within the owning SG, e.g. to allow each VM within the SG to send to and receive from the others. (Note: this applies to the internal IP addresses of the VMs, not any public IP address assigned to a VM.)

Use CIDR address 0.0.0.0/0 to represent everyone, e.g. for inbound HTTP requests to a website

For a specific IP address, append /32 to the address, e.g. 75.1.25.1/32 for inbound RDP or SSH

If there is more than one rule for a specific port, then the most permissive rule is applied. For example, if you have one rule that allows access to TCP port 3389 (RDP) from IP address 75.1.25.1 and another rule that allows access to TCP port 3389 from everyone, then everyone has access to TCP port 3389.
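
Putting those last few points together, a Heat snippet for a security group that allows HTTP from everyone but RDP only from a single management address might look like this (the group name and address are purely illustrative):

  web_sg:
    type: OS::Neutron::SecurityGroup
    properties:
      name: web_sg
      description: HTTP from anywhere, RDP from one management IP only
      rules:
        - direction: ingress                 # inbound HTTP from everyone
          protocol: tcp
          port_range_min: 80
          port_range_max: 80
          remote_ip_prefix: 0.0.0.0/0
        - direction: ingress                 # inbound RDP from a single /32 address
          protocol: tcp
          port_range_min: 3389
          port_range_max: 3389
          remote_ip_prefix: 75.1.25.1/32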