//Cloud notes from my desk -Maheshk

"Fortunate are those who take the first steps." ― Paulo Coelho

[ServiceFabric] How to change/reset the RDP password for Service Fabric VMSS instances using PowerShell

Today one of my colleagues asked me this question for his customer. I had never tried it before, but I knew it was not straightforward to reset from the Azure portal. After searching my emails, I found a PS script recommended in the past. I was curious to test and share it, so I quickly deployed a cluster and verified it. It worked.

Login-AzureRmAccount
$vmssName = "mltnnode"
$vmssResourceGroup = "jailbird-SF-RG"
$publicConfig = @{"UserName" = "mikkyuname"}
$privateConfig = @{"Password" = "newpass@1234"}
$extName = "VMAccessAgent"
$publisher = "Microsoft.Compute"
$vmss = Get-AzureRmVmss -ResourceGroupName $vmssResourceGroup -VMScaleSetName $vmssName
$vmss = Add-AzureRmVmssExtension -VirtualMachineScaleSet $vmss -Name $extName -Publisher $publisher -Setting $publicConfig -ProtectedSetting $privateConfig -Type $extName -TypeHandlerVersion "2.0" -AutoUpgradeMinorVersion $true
Update-AzureRmVmss -ResourceGroupName $vmssResourceGroup -Name $vmssName -VirtualMachineScaleSet $vmss


For Linux:- https://azure.microsoft.com/en-us/blog/using-vmaccess-extension-to-reset-login-credentials-for-linux-vm/

PS:- Allow a few minutes for the VMSS instance update to go through. You can navigate to VMSS > Instances to confirm the update is done and the instances are back in "Running" state, then RDP with your new password.
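If you prefer to check from PowerShell instead of the portal, a quick sketch (reusing the variable names from the script above) lists each instance and its provisioning state:

```powershell
# List each VMSS instance and its provisioning state; wait until all show "Succeeded"
Get-AzureRmVmssVM -ResourceGroupName $vmssResourceGroup -VMScaleSetName $vmssName |
    Select-Object Name, ProvisioningState
```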

2017-08-21 Posted by | Powershell, ServiceFabric | | Leave a comment

[Azure Service Fabric] Use of EnableDefaultServicesUpgrade property

Recently I had an issue where a Service Fabric application upgrade failed to deploy as expected after changing the instance count in Cloud.xml. Here is what I tried and the error I received.

Problem:-

  1. Create a stateless project with the latest Azure Service Fabric SDK 5.5
  2. Deploy first with Stateless1_InstanceCount set to -1 (default)
  3. Now set Stateless1_InstanceCount to, say, 2 in Cloud.xml and redeploy with the upgrade option checked

While publishing this upgrade from Visual Studio, I got an error saying a property value was expected to be "true", with no clue at first glance.

Visual Studio error:-

1>------ Build started: Project: Application3, Configuration: Debug x64 ------
2>------ Publish started: Project: Application3, Configuration: Debug x64 ------
2>Started executing script 'GetApplicationExistence'.
2>Finished executing script 'GetApplicationExistence'.
2>Time elapsed: 00:00:01.5800095
-------- Package started: Project: Application3, Configuration: Debug x64 ------
Application3 -> D:\Cases_Code\Application3\Application3\pkg\Debug
-------- Package: Project: Application3 succeeded, Time elapsed: 00:00:00.7978341 --------
2>Started executing script 'Deploy-FabricApplication.ps1'.
2>. 'D:\Cases_Code\Application3\Application3\Scripts\Deploy-FabricApplication.ps1' -ApplicationPackagePath 'D:\Cases_Code\Application3\Application3\pkg\Debug' -PublishProfileFile 'D:\Cases_Code\Application3\Application3\PublishProfiles\Cloud.xml' -DeployOnly:$false -ApplicationParameter:@{} -UnregisterUnusedApplicationVersionsAfterUpgrade $false -OverrideUpgradeBehavior 'None' -OverwriteBehavior 'SameAppTypeAndVersion' -SkipPackageValidation:$false -ErrorAction Stop
2>Copying application package to image store...
2>Copy application package succeeded
2>Registering application type...
2>Register application type succeeded
2>Start upgrading application...
2>Unregister application type '@{FabricNamespace=fabric:; ApplicationTypeName=Application3Type; ApplicationTypeVersion=1.1.0}.ApplicationTypeName' and version '@{FabricNamespace=fabric:; ApplicationTypeName=Application3Type; ApplicationTypeVersion=1.1.0}.ApplicationTypeVersion' ...
2>Unregister application type started (query application types for status).
2>Start-ServiceFabricApplicationUpgrade : Default service descriptions can not be modified as part of upgrade.
2>Modified default service: fabric:/Application3/Stateless1. To allow it, set EnableDefaultServicesUpgrade to true.
2>At C:\Program Files\Microsoft SDKs\Service
2>Fabric\Tools\PSModule\ServiceFabricSDK\Publish-UpgradedServiceFabricApplication.ps1:248 char:13
2>+             Start-ServiceFabricApplicationUpgrade @UpgradeParameters
2>+             ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2>    + CategoryInfo          : InvalidOperation: (Microsoft.Servi...usterConnection:ClusterConnection) [Start-Servi
2>   ceFabricApplicationUpgrade], FabricException
2>    + FullyQualifiedErrorId : UpgradeApplicationErrorId,Microsoft.ServiceFabric.Powershell.StartApplicationUpgrade
2>
2>Finished executing script 'Deploy-FabricApplication.ps1'.
2>Time elapsed: 00:00:22.5520036
2>The PowerShell script failed to execute.
========== Build: 1 succeeded, 0 failed, 1 up-to-date, 0 skipped ==========
========== Publish: 0 succeeded, 1 failed, 0 skipped ==========

Upon searching our internal discussion forum, I learned this property needs to be updated from resources.azure.com or through PS.

By default, the instance count is set to "-1" in Cloud.xml (or the application manifest XML), which deploys the service to all available nodes. At times we may need to reduce the instance count; if that is the case, follow either of the options below.

Option #1 (update through the resources.azure.com portal)

1) The error message makes it clear that the SF cluster expects the property "EnableDefaultServicesUpgrade" to be set to true to proceed with this upgrade.

2) This link describes adding SF cluster settings from the resources.azure.com portal – https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-fabric-settings (refer to the steps at the top of the page).

3) Update your cluster settings as below and wait at least 30-40 minutes, depending on the number of nodes etc.

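For reference, the setting goes under the cluster resource's fabricSettings, in the ClusterManager section. A minimal sketch of the JSON fragment to PUT (section and parameter names per the docs link above; the rest of the cluster resource body is omitted here):

```json
"fabricSettings": [
  {
    "name": "ClusterManager",
    "parameters": [
      {
        "name": "EnableDefaultServicesUpgrade",
        "value": "true"
      }
    ]
  }
]
```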

4) After this PUT call, you will see a small banner saying the cluster is upgrading in portal.azure.com > SF cluster overview blade.

5) Wait until the upgrade banner goes away, then run the GET command from resources.azure.com to confirm the value is reflected.
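As an alternative to the GET from resources.azure.com, a sketch using Get-AzureRmResource should show the same fabricSettings (the resource group and cluster names here are placeholders):

```powershell
# Read the cluster resource and print its fabricSettings to verify the new value
$cluster = Get-AzureRmResource -ResourceType "Microsoft.ServiceFabric/clusters" `
    -ResourceGroupName "my-sf-rg" -ResourceName "mysfcluster"
$cluster.Properties.fabricSettings | ConvertTo-Json -Depth 5
```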

Option #2 (update through PS)

You can use the PS below to update this value.

$ClusterName = "<your client connection endpoint> e.g. abc.westus.cloudapp.azure.com:19000"
$Certthumprint = "xxxxxx5a813118ef9cf523a4df13d"

Connect-ServiceFabricCluster -ConnectionEndpoint $ClusterName -KeepAliveIntervalInSec 10 `
    -X509Credential `
    -ServerCertThumbprint $Certthumprint `
    -FindType FindByThumbprint `
    -FindValue $Certthumprint `
    -StoreLocation CurrentUser `
    -StoreName My

Update-ServiceFabricService -Stateless fabric:/KeyPair.WebService/KeyPairAPI -InstanceCount 2

https://docs.microsoft.com/en-us/powershell/module/servicefabric/update-servicefabricservice?view=azureservicefabricps 

Final step:-

After the settings update, go back to Visual Studio (2017) and publish the app upgrade again. This time the application should deploy without any error.

You can confirm this by checking the number of nodes where the app is deployed. From the Service Fabric Explorer (SFX) portal, you should see the application deployed on just 2 nodes instead of all available nodes.

I had a 3-node cluster where I set the instance count to 2 to see the reduction.

Note:- The only caveat is that the SFX portal manifest will not reflect the latest instance count; it will still show "-1", which you can ignore.
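To double-check the live value rather than the SFX manifest, the service description can be queried directly. A sketch, assuming you are already connected to the cluster and using the service name from this example:

```powershell
# Returns the stateless service description, including the effective InstanceCount
Get-ServiceFabricServiceDescription -ServiceName fabric:/Application3/Stateless1
```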

2017-05-24 Posted by | ARM, Azure, PaaS, Powershell, ServiceFabric | | 1 Comment

[Azure Powershell] How to improve the performance of BlobCopy by placing inline C#.NET code

Performance issues are always hard and tricky to fix. Recently I was asked to check why a given PowerShell script took so long and got slower over heavy iteration. In the beginning, the copy operation took only a few minutes, but over heavy iteration we saw performance degrade to 2x, 3x…

Sample loop of the copy command:

    foreach ($SrcBlob in $SrcBlobs)
    {
        $DestBlob = "root/" + $SrcBlob.Name
        Start-AzureStorageBlobCopy -SrcBlob $SrcBlob.Name -SrcContainer $SrcContainerName -Context $SrcContext -DestBlob $DestBlob -DestContainer $DestContainerName -DestContext $DestContext -Force
    }

We also used Measure-Command to capture the duration of each iteration, printing the value per iteration and the sum at the end of the loop to confirm. We ran the loop copying 5000+ blobs between storage accounts and found that the cmdlet execution got slower as the iterations progressed. On investigation, we confirmed this was due to a known limitation of PowerShell.

    $CopyTime = Measure-Command {
        Start-AzureStorageBlobCopy -SrcBlob $SrcBlob.Name -SrcContainer $SrcContainerName -Context $SrcContext -DestBlob $DestBlob -DestContainer $DestContainerName -DestContext $DestContext -Force
    }

Yes, there is a known issue with PowerShell running slowly over large loops, due to the way PowerShell works: on the 16th iteration the content of the loop is compiled dynamically, and .NET then runs some extra checks on every call. Thanks to some of our internal folks for providing clarity on this. To overcome it, there is a suggested workaround, which is why we are here: we moved the copy logic into .NET code placed inside the PowerShell script to improve performance. That way, the security check runs only once instead of on every iteration. You will find detailed information here –> Why can't PowerShell run loops fast? https://blogs.msdn.microsoft.com/anantd/2014/07/25/why-cant-powershell-run-loops-fast/

How to place C# as inline code within PowerShell:-

$reflib = (Get-Item "c:\temp\Microsoft.WindowsAzure.Storage.dll").FullName
[void][reflection.assembly]::LoadFrom($reflib)

$Source = @"
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Auth;
using Microsoft.WindowsAzure.Storage.Blob;
using System;

namespace ns {
    public static class copyfn1 {
        public static void Copy_bw_SA_Blobs_Test(string sourceAccountKey, string destAcKey, string SrcSAName, string DestSAName, string SrcContainerName, string DestContainerName, string SrcPrefix, string DestPrefix) {
            StorageCredentials scSrc = new StorageCredentials(SrcSAName, sourceAccountKey);
            CloudStorageAccount srcAc = new CloudStorageAccount(scSrc, true);
            CloudBlobClient cbcSrc = srcAc.CreateCloudBlobClient();
            CloudBlobContainer contSrc = cbcSrc.GetContainerReference(SrcContainerName);

            // Generate a SAS key and use it for delegated access
            SharedAccessBlobPolicy sasConstraints = new SharedAccessBlobPolicy();
            sasConstraints.SharedAccessExpiryTime = DateTime.UtcNow.AddHours(24);
            sasConstraints.Permissions = SharedAccessBlobPermissions.Write
                | SharedAccessBlobPermissions.List | SharedAccessBlobPermissions.Add
                | SharedAccessBlobPermissions.Delete | SharedAccessBlobPermissions.Create
                | SharedAccessBlobPermissions.Read;
            string sasContainerToken = contSrc.GetSharedAccessSignature(sasConstraints);
            // Build the URI string for the container, including the SAS token
            string containersas = contSrc.Uri + sasContainerToken;
            CloudBlobContainer container = new CloudBlobContainer(new Uri(containersas));

            // Destination account - no SAS required
            StorageCredentials scDst = new StorageCredentials(DestSAName, destAcKey);
            CloudStorageAccount DstAc = new CloudStorageAccount(scDst, true);
            CloudBlobClient cbcDst = DstAc.CreateCloudBlobClient();
            CloudBlobContainer contDst = cbcDst.GetContainerReference(DestContainerName);

            foreach (var eachblob in container.ListBlobs(SrcPrefix, true, BlobListingDetails.Copy)) {
                CloudBlob srcBlob = (CloudBlob)eachblob;
                string srcpath = srcBlob.Name;
                string dstpath = (DestPrefix != "") ? srcpath.Replace(SrcPrefix, DestPrefix) : srcpath;
                Console.WriteLine("Files copying-" + dstpath);

                if (srcBlob.BlobType == BlobType.BlockBlob) {
                    CloudBlockBlob dstblob = contDst.GetBlockBlobReference(dstpath);
                    dstblob.StartCopy((CloudBlockBlob)srcBlob);
                }
                else if (srcBlob.BlobType == BlobType.AppendBlob) {
                    CloudAppendBlob dstblob = contDst.GetAppendBlobReference(dstpath);
                    dstblob.StartCopy((CloudAppendBlob)srcBlob);
                }
                else if (srcBlob.BlobType == BlobType.PageBlob) {
                    CloudPageBlob dstblob = contDst.GetPageBlobReference(dstpath);
                    dstblob.StartCopy((CloudPageBlob)srcBlob);
                }
            }
        }
    }
}
"@

Add-Type -ReferencedAssemblies $reflib -TypeDefinition $Source -Language CSharp -PassThru

[ns.copyfn1]::Copy_bw_SA_Blobs_Test("acc_key1", "acc_key2", "storage_acc1", "storage_acc2",
    "src_container_name", "dest_container_name", "sales/2017/Jan", "sales/2017/backup/")

 

With this inline C# code, we were able to optimize the copy duration and mitigate the performance degradation over a heavy loop.

Let me know if this helps in some way, or if you hit any issue with it.

2017-04-17 Posted by | .NET, Azure, C#, Powershell | | Leave a comment

[Azure Storage] How to call Storage APIs from PowerShell (without the SDK)

Recently I had an ask from a partner who wanted sample code for calling the Storage REST API without going through our SDKs. Though we have an array of SDKs supporting many languages, he wanted to make clean REST calls without an SDK. It took some time to understand and create this proof of concept, so I am sharing it here for easy reference. Hope this helps in some way..

$accountname = "your_storage_accname"
$key = "acc_key"
$container = "container_name"

# file to create
$blkblob = "samplefile.log"
$f = "C:\temp\samplefile.log"
$BlobOperation = "PUT"

$body = (Get-Content -Path $f -Raw)
$filelen = $body.Length
# added this per comments in the below blog post - Content-Length must be the file size in bytes, not the string length
$filelen = (Get-ChildItem -File $f).Length

$RESTAPI_URL = "https://$accountname.blob.core.windows.net/$container/$blkblob"

$date = (Get-Date).ToUniversalTime()
$datestr = $date.ToString("R")
$datestr2 = $date.ToString("s") + "Z"

$strtosign = "$BlobOperation`n`n`n$filelen`n`n`n`n`n`n`n`n`nx-ms-blob-type:BlockBlob`nx-ms-date:$datestr`nx-ms-version:2015-04-05`n/"
$strtosign = $strtosign + $accountname + "/"
$strtosign = $strtosign + $container
$strtosign = $strtosign + "/" + $blkblob

Write-Host $strtosign

[byte[]]$dataBytes = ([System.Text.Encoding]::UTF8).GetBytes($strtosign)
$hmacsha256 = New-Object System.Security.Cryptography.HMACSHA256
$hmacsha256.Key = [Convert]::FromBase64String($key)
$sig = [Convert]::ToBase64String($hmacsha256.ComputeHash($dataBytes))
$authhdr = "SharedKey $accountname`:$sig"

Write-Host $authhdr

$RequestHeader = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$RequestHeader.Add("Authorization", $authhdr)
$RequestHeader.Add("x-ms-date", $datestr)
$RequestHeader.Add("x-ms-version", "2015-04-05")
$RequestHeader.Add("x-ms-blob-type", "BlockBlob")

# make the PUT call and capture the response
$RESTResponse = (Invoke-RestMethod -Uri $RESTAPI_URL -Method Put -Headers $RequestHeader -InFile $f)

Write-Host $RESTResponse
Write-Host "# Success !!! uploaded the file >>" $RESTAPI_URL

————————————————————————————————————————————————————-

# Delete a blob using the same Shared Key approach
$accountname = "your_storage_accname"
$key = "acc_key"
$container = "container_name"
$blkblob = "file_for_deletion"
$BlobOperation = "DELETE"

$RESTAPI_URL = "https://$accountname.blob.core.windows.net/$container/$blkblob"

$date = (Get-Date).ToUniversalTime()
$datestr = $date.ToString("R")
$datestr2 = $date.ToString("s") + "Z"

$strtosign = "$BlobOperation`n`n`n`n`n`n`n`n`n`n`n`nx-ms-blob-type:BlockBlob`nx-ms-date:$datestr`nx-ms-version:2015-04-05`n/"
$strtosign = $strtosign + $accountname + "/"
$strtosign = $strtosign + $container
$strtosign = $strtosign + "/" + $blkblob
Write-Host $strtosign

[byte[]]$dataBytes = ([System.Text.Encoding]::UTF8).GetBytes($strtosign)
$hmacsha256 = New-Object System.Security.Cryptography.HMACSHA256
$hmacsha256.Key = [Convert]::FromBase64String($key)
$sig = [Convert]::ToBase64String($hmacsha256.ComputeHash($dataBytes))
$authhdr = "SharedKey $accountname`:$sig"
Write-Host $authhdr

$RequestHeader = New-Object "System.Collections.Generic.Dictionary[[String],[String]]"
$RequestHeader.Add("Authorization", $authhdr)
$RequestHeader.Add("x-ms-date", $datestr)
$RequestHeader.Add("x-ms-version", "2015-04-05")
$RequestHeader.Add("x-ms-blob-type", "BlockBlob")
Write-Host $RESTAPI_URL

$RESTResponse = (Invoke-RestMethod -Uri $RESTAPI_URL -Method Delete -Headers $RequestHeader)

Write-Host "# Success !!! deleted the input file >>" $RESTAPI_URL

Reference:-

https://dzone.com/articles/examples-windows-azure-storage

https://docs.microsoft.com/en-us/azure/storage/storage-introduction#storage-apis-libraries-and-tools

2017-04-06 Posted by | Azure, Powershell, Storage | | 3 Comments

Add-AzureAccount issue: Your Azure credentials have not been set up or have expired

For some reason, I started experiencing this issue. I tried clearing the local cache, temp folders, cookies etc., but none of it helped. So I grabbed a Fiddler log to see what was going wrong and noticed a very old session token (3-4 months old) was being served for some reason, with no clue where it lived. I checked a few cmdlets to clear this out within PS and found that Clear-AzureProfile flushed the older tokens, which resolved the issue.

PS C:\WINDOWS\system32> Add-AzureAccount

Id Type Subscriptions Tenants
-- ---- ------------- -------
xxxx@microsoft.com User xx-c5bc-xx-a7e0-xx {xxx-86f1-41af-91ab-xxxx}

PS C:\WINDOWS\system32> Get-AzureRoleSize

Get-AzureRoleSize : Your Azure credentials have not been set up or have expired, please run Add-AzureAccount to set up your Azure credentials.
At line:1 char:1
+ Get-AzureRoleSize
+ ~~~~~~~~~~~~~~~~~
    + CategoryInfo : CloseError: (:) [Get-AzureRoleSize], ArgumentException
    + FullyQualifiedErrorId : Microsoft.WindowsAzure.Commands.ServiceManagement.HostedServices.AzureRoleSizeCommand

 

Solution:-

Run the Clear-AzureProfile cmdlet, then Add-AzureAccount to get a valid bearer token and continue.

PS C:\WINDOWS\system32> Clear-AzureProfile -Force

Let me know if you have seen this issue and found a root cause for it.

Happy scripting…

2017-01-19 Posted by | AAD, Azure, Powershell | | 9 Comments

How to specify VNet details when creating New-AzureBatchPool compute nodes

Recently I had an ask from a developer to check a script execution issue with the Azure PowerShell cmdlet New-AzureBatchPool. He wanted to specify VNet details as a parameter to this command so that the Batch pool nodes would be created within that VNet. Unfortunately, we did not have any sample to refer to or to validate the parameter against. We spent quite an amount of time tweaking the parameter to see the effect, but no luck. I later found a link where these details are explained well enough to try on our own.

Please note, failing to follow these conditions will throw errors. I suggest starting with https://docs.microsoft.com/en-us/azure/batch/batch-api-basics#pool-network-configuration and also https://msdn.microsoft.com/library/azure/dn820174.aspx#bk_netconf. Each and every condition in the below list matters.

  • The specified Virtual Network (VNet) must be in the same Azure region as the Azure Batch account.
  • The specified VNet must be in the same subscription as the Azure Batch account.
  • The specified VNet must be a Classic VNet. VNets created via Azure Resource Manager are not supported.
  • The specified subnet should have enough free IP addresses to accommodate the “targetDedicated” property. If the subnet doesn’t have enough free IP addresses, the pool will partially allocate compute nodes, and a resize error will occur.
  • The “MicrosoftAzureBatch” service principal must have the “Classic Virtual Machine Contributor” Role-Based Access Control (RBAC) role for the specified VNet. If the specified RBAC role is not given, the Batch service returns 400 (Bad Request).
  • The specified subnet must allow communication from the Azure Batch service to be able to schedule tasks on the compute nodes. This can be verified by checking if the specified VNet has any associated Network Security Groups (NSG). If communication to the compute nodes in the specified subnet is denied by an NSG, then the Batch service will set the state of the compute nodes to unusable.
  • This property can be specified only for pools created with cloudServiceConfiguration. If this is specified on pools created with the virtualMachineConfiguration property, the Batch service returns 400 (Bad Request).

Working PowerShell cmdlets for easy reference:-

Add-AzureRmAccount
Select-AzureRmSubscription -SubscriptionName "xxxxx Azure xxx xxxx – xxxx"

$batchcontext = Get-AzureRmBatchAccountKeys -AccountName nicoloasbatch

$objectvnetconf = New-Object -TypeName Microsoft.Azure.Commands.Batch.Models.PSNetworkConfiguration
$objectvnetconf.SubnetId = "/subscriptions/xxxxxxxxxxxxxxxxxx/resourceGroups/nicoloasbatch/providers/Microsoft.ClassicNetwork/virtualNetworks/nicolasclassicvnet/subnets/mysubnet1"

$configuration = New-Object -TypeName "Microsoft.Azure.Commands.Batch.Models.PSCloudServiceConfiguration" -ArgumentList @(4,"*")

New-AzureBatchPool -Id "MikkybatchPool" -VirtualMachineSize "Small" -TargetDedicated 1 -BatchContext $batchcontext -NetworkConfiguration $objectvnetconf -CloudServiceConfiguration $configuration

How to specify RBAC details, explained in screenshots.

> The “MicrosoftAzureBatch” service principal must have the “Classic Virtual Machine Contributor” Role-Based Access Control (RBAC) role for the specified VNet. If the specified RBAC role is not given, the Batch service returns 400 (Bad Request).

Step 1:-

[screenshot]

Step 2:-

[screenshot]

Step 3:-

[screenshot]

How to verify whether it executed successfully or not:

[screenshot]

On successful execution…

[screenshot]

Reference:-

Pool network configuration- https://docs.microsoft.com/en-us/azure/batch/batch-api-basics#pool-network-configuration

Add a pool to an account (networkConfiguration) – https://msdn.microsoft.com/library/azure/dn820174.aspx#bk_netconf

Thanks to Marie-Magdelaine Nicolas for sharing the PowerShell cmdlets.

2016-11-25 Posted by | Azure, Azure Batch, PaaS, Powershell | | Leave a comment