//Cloud notes from my desk -Maheshk

“Fortunate are those who take the first steps.” ― Paulo Coelho

Why Azure Kubernetes Service (AKS) vs Others

What is AKS?
– Deploys a managed Kubernetes cluster in Azure.
– Reduces the complexity and operational overhead of managing K8s by offloading much of that responsibility to Azure.
– Handles critical tasks like health monitoring and maintenance for you.
– The masters are managed by Azure; you only manage and maintain the agent nodes.
– Free: you pay only for the agent nodes, not for the masters.


Why AKS vs Others?
– Streamlined application onboarding with integrated VSTS CI/CD via DevOps Projects
– Deep integration with Azure Monitor and Log Search
– Azure Dev Spaces for AKS enables multiple developers to collaborate and rapidly iterate/debug microservices directly in an AKS dev environment
– Open source thought leadership through projects like Virtual Kubelet, Helm, Draft, Brigade & Kashti, and our contribution to the open source community
– Support for scenarios such as elastic bursting using Azure Container Instances (ACI) and Virtual Kubelet
– Use Key Vault for increased security and control over Kubernetes keys and passwords; create and import encryption keys in minutes
– Developers and operations can be assured their workloads will have automated OS & framework patching with ACR Build
– Rich tooling support: VS Code/VS integration (VS Code is a free code editor; try it today, you’ll thank us)

Best practice guidance
———————-
> For integration with existing virtual networks or on-premises networks, use advanced networking in AKS.
> Advanced networking also provides greater separation of resources and controls in an enterprise environment.

Two different ways to deploy AKS clusters into virtual networks:
+ Basic networking – Azure manages the virtual network resources as the cluster is deployed and uses the kubenet Kubernetes plugin.
+ Advanced networking – Deploys into an existing virtual network, and uses the Azure Container Networking Interface (CNI) Kubernetes plugin. Pods receive individual IPs that can route to other network services or on-premises resources.
The Container Networking Interface (CNI) is a vendor-neutral protocol that lets the container runtime make requests to a network provider. The Azure CNI assigns IP addresses to pods and nodes, and provides IP address management (IPAM) features as you connect to existing Azure virtual networks. Each node and pod resource receives an IP address in the Azure virtual network, and no additional routing is needed to communicate with them.
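Advanced networking is selected at cluster creation time by picking the Azure CNI plugin and pointing at a pre-created subnet. The sketch below only prints the command it would run (all resource names and the subnet ID are hypothetical placeholders; executing it for real needs a live Azure subscription):

```shell
# Sketch: creating an AKS cluster with advanced networking (Azure CNI).
# All names below are hypothetical; the command is echoed, not executed.
SUBNET_ID="/subscriptions/<sub-id>/resourceGroups/myVnetRG/providers/Microsoft.Network/virtualNetworks/myVnet/subnets/aks-subnet"
set -- az aks create \
  --resource-group myAKSCluster \
  --name myAKSCluster \
  --network-plugin azure \
  --vnet-subnet-id "$SUBNET_ID"
echo "$@"
```

With `--network-plugin azure`, pods get IPs directly from the subnet, so size the subnet for (nodes × max pods per node) addresses up front.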

$ az aks create --resource-group myAKSCluster --name myAKSCluster --generate-ssh-keys \
--aad-server-app-id <server-app-id> \
--aad-server-app-secret <server-app-secret> \
--aad-client-app-id <client-app-id> \
--aad-tenant-id <tenant-id>

$ az aks get-credentials --resource-group myAKSCluster --name myAKSCluster --admin
Merged "myAKSCluster-admin" as current context ..

$ kubectl get nodes

NAME                       STATUS    ROLES     AGE       VERSION
aks-nodepool1-42032720-0   Ready     agent     1h        v1.9.6
aks-nodepool1-42032720-1   Ready     agent     1h        v1.9.6
aks-nodepool1-42032720-2   Ready     agent     1h        v1.9.6
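Once the nodes report Ready, workloads can be deployed with kubectl. A minimal sketch of a Deployment manifest (names and image are illustrative, not from the cluster above):

```yaml
# minimal-nginx.yaml - hypothetical sample; apply with: kubectl apply -f minimal-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-nginx
  template:
    metadata:
      labels:
        app: hello-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
```

`kubectl get pods` afterwards should show two replicas scheduled across the agent nodes.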

2019-03-06 Posted by | AKS, Azure Dev, Kubernetes, Linux, Microservices, PaaS

[LFCS] Commands to manage and configure containers in Linux

– LXC (Linux Containers) is OS-level virtualization for running multiple isolated Linux systems (containers) on a single kernel
– LXC uses the kernel’s cgroups (control groups) to provide isolated space for our application
– The Linux kernel provides the cgroups functionality plus namespace isolation –> cgroups is the brain behind this virtualization
– cgroups provides { resource limiting, prioritization, accounting and control }
– Various projects use cgroups as their basis, including Docker, CoreOS, Red Hat, Hadoop, libvirt, LXC, Open Grid/Grid Engine, Kubernetes, systemd, Mesos and Mesosphere
– The initial version of Docker used LXC as its execution environment, but it was later replaced with libcontainer, written in Go
– Docker and LMCTFY (“Let Me Contain That For You”) have taken over the containers space and are used by many companies
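As a quick illustration of the cgroups machinery mentioned above: every process on a Linux host records its cgroup membership under /proc, so you can see which groups your own shell belongs to without installing anything.

```shell
# Every Linux process belongs to one or more cgroups;
# /proc/self/cgroup lists the membership of the current shell
# (one line per hierarchy on cgroup v1, a single "0::" line on v2).
cat /proc/self/cgroup
```

The exact output depends on the distro and cgroup version, but each line has the form `hierarchy-id:controllers:path`.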


>>>LXC – Linux container commands

$ sudo -i {switch to root account}
$ apt update
$ free -m { check your memory availability, -m for MB, -G for GB }
$ apt install lxc  { Linux Containers; early Docker was based on this }
$ systemctl status lxc.service
$ systemctl enable lxc
$ lxc ->tab -> tab  { to see all the lxc- commands }
$ cd /usr/share/lxc/templates
$ ls { should see the list of templates }
$ lxc-create -n mylxcontainer -t ubuntu { should create ubuntu container based on the specified template}
$ lxc-ls { list the local container, ubuntu should appear with name mylxcontainer }
$ lxc-info -n mylxcontainer { should see the status as STOPPED }
$ lxc-start -n mylxcontainer
$ lxc-info -n mylxcontainer { should see the state as RUNNING }
$ lxc-console -n mylxcontainer { console login to the container; username: ubuntu, password: ubuntu }
ubuntu@mylxcontainer:~$ { upon login, the console prompt changes to the container’s }
$ uname -a or hostname { to confirm you are within the container }
Type <Ctrl+a q> to exit the console
$ lxc-stop -n mylxcontainer
$ lxc-destroy -n mylxcontainer

>>>Docker container commands
$ apt update
$ free -m
$ apt install docker.io
$ systemctl enable docker
$ systemctl start docker
$ systemctl status docker
$ docker info
$ docker version
$ docker run hello-world { to pull the hello-world for testing }
$ docker ps
$ docker ps -la or $ docker ps -a { list all the containers }
$ docker search apache or microsoft { to search container by name }
$ docker images { to list all the images in localhost }
$ docker pull ubuntu
$ docker run -it --rm -p 8080:80 nginx  { for nginx; -it for interactive }
$ docker ps -la { list all the containers, look for container_id, type first 3 letters which is enough }
$ docker start container_id or ubuntu { say efe }
$ docker stop efe
$ docker run -it ubuntu bash
root@efe34sdsdsds:/# { drops you into the container’s bash }
<type Ctrl+p then Ctrl+q> to detach and switch back to the terminal
$ docker save debian -o mydebian.tar
$ docker load -i mydebian.tar
$ docker export web-container -o xyz.tar
$ docker import xyz.tar
$ docker logs containername or id
$ docker logs -f containername or id { live logs or streaming logs }
$ docker stats
$ docker top container_id
$ docker build -t my-image dockerfiles/ or $ docker build -t aspnet5 .  { the dot at the end makes the current directory, containing the Dockerfile, the build context }
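For reference, `docker build` expects a Dockerfile in the build context directory. A minimal hypothetical one, matching the nginx examples above (the file names are illustrative):

```dockerfile
# Dockerfile - hypothetical minimal image serving one static page
FROM nginx:alpine
# Copy a local static page into the web root served by nginx
COPY index.html /usr/share/nginx/html/index.html
```

Building with `docker build -t my-nginx .` and running with `docker run --rm -p 8080:80 my-nginx` would then serve the page on http://localhost:8080.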

>>>for working with Azure Container

$ az acr login --name myregistry
$ docker login myregistry.azurecr.io -u xxxxxxxx -p myPassword
$ docker pull nginx
$ docker run -it --rm -p 8080:80 nginx { browse to http://localhost:8080 }
{To stop and remove the container, press Control+C.}
$ docker tag nginx myregistry.azurecr.io/samples/nginx
$ docker push myregistry.azurecr.io/samples/nginx
$ docker pull myregistry.azurecr.io/samples/nginx
$ docker run -it --rm -p 8080:80 myregistry.azurecr.io/samples/nginx
$ docker rmi myregistry.azurecr.io/samples/nginx
$ docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" nginx
$ az acr repository delete --name myregistry --repository samples/nginx --tag latest --manifest
$ docker run -d redis (By default, Docker runs a command in the foreground. To run in the background, the -d option needs to be specified.)
$ docker run -d redis:latest
$ docker start $(docker ps -a -q)
$ docker rm -f $(docker ps -a -q)
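The tag/push pair above follows the registry naming convention `<login server>/<repository>:<tag>`. A small sketch of building that reference ("myregistry" is the hypothetical registry name used throughout; the actual tag/push is commented out since it needs a live registry):

```shell
# ACR image references take the form <login-server>/<repository>:<tag>.
ACR_LOGIN_SERVER=myregistry.azurecr.io   # hypothetical registry login server
REPO=samples/nginx
TAG=latest
IMAGE="$ACR_LOGIN_SERVER/$REPO:$TAG"
echo "$IMAGE"
# docker tag nginx "$IMAGE" && docker push "$IMAGE"   # needs a live registry
```

Printing `$IMAGE` gives `myregistry.azurecr.io/samples/nginx:latest`, the exact name the pull/run/rmi commands above operate on.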

>>>docker minified version
$ docker pull docker.io/httpd
$ docker images
$ docker run httpd
$ docker ps [-a | -l]
$ docker info
$ docker run httpd
$ curl http://172.17.0.2  <ctrl+c>
$ docker stop httpd
$ docker rmi -f docker.io/httpd
$ systemctl stop docker

Happy learning!

2018-05-27 Posted by | LFCS, Linux, Microservices, Open Source

[Service Fabric] How to Secure a standalone cluster (On Prem)

This blog post is based on this article – https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server. I ran into issues, so I would like to break it into step-by-step form for easier reference, along with precautions.

Steps 1 and 2 have the same objective. All we need here is to make sure NETWORK SERVICE is added and has the permissions set.

1) Install the certificate on the server node: Console Root > Local Computer > Personal > Certificates > install at this level and hit refresh to confirm.


After the installation, right-click the cert > All Tasks > Manage Private Keys > add NETWORK SERVICE, keep the default permissions as they are and save. We should see “Allow” for Full Control & Read permissions.

2) Alternatively, you can achieve the same using PowerShell. Open the PowerShell ISE window and run the script below in Admin mode to make this update. This step is optional if you have already performed step #1 manually.

param
(
    [Parameter(Position=1, Mandatory=$true)]
    [ValidateNotNullOrEmpty()]
    [string]$pfxThumbPrint,

    [Parameter(Position=2, Mandatory=$true)]
    [ValidateNotNullOrEmpty()]
    [string]$serviceAccount
)

$cert = Get-ChildItem -Path Cert:\LocalMachine\My | Where-Object -FilterScript { $PSItem.Thumbprint -eq $pfxThumbPrint; }

# Specify the user, the permissions and the permission type
$permission = "$($serviceAccount)","FullControl","Allow"
$accessRule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList $permission

# Location of the machine-related keys
$keyPath = Join-Path -Path $env:ProgramData -ChildPath "Microsoft\Crypto\RSA\MachineKeys"
$keyName = $cert.PrivateKey.CspKeyContainerInfo.UniqueKeyContainerName
$keyFullPath = Join-Path -Path $keyPath -ChildPath $keyName

# Get the current ACL of the private key
$acl = (Get-Item $keyFullPath).GetAccessControl('Access')

# Add the new ACE to the ACL of the private key
$acl.SetAccessRule($accessRule)

# Write back the new ACL
Set-Acl -Path $keyFullPath -AclObject $acl -ErrorAction Stop

# Observe the access rights currently assigned to this certificate
Get-Acl $keyFullPath | fl

———————-

Parameters:-

On execution, enter your cert thumbprint and service account details as below.

pfxThumbPrint: AA4E00A783B246D53A88433xxxx55F493AC6D7

serviceAccount: NETWORK SERVICE

Output:-

Path   : Microsoft.PowerShell.Core\FileSystem::C:\ProgramData\Microsoft\Crypto\RSA\MachineKeys
Owner  : NT AUTHORITY\SYSTEM
Group  : NT AUTHORITY\SYSTEM
Access : Everyone Allow  Write, Read, Synchronize
         NT AUTHORITY\NETWORK SERVICE Allow  FullControl
         BUILTIN\Administrators Allow  FullControl
Audit  :
Sddl   : O:SYG:SYD:PAI(A;;0x12019f;;;WD)(A;;FA;;;NS)(A;;FA;;;BA)

3) Step (1 or 2) is the only change required on the server side for the certificate.

Now download the Service Fabric standalone package – https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-creation-for-windows-server – and extract it to, say, C:\WindowsServiceFabricCluster.

4) Pick the “ClusterConfig.X509.DevCluster” JSON template file, update it with your thumbprint and save it.

PS: I have removed the secondary certificate and reverse proxy certificate sections for simplicity.

    "security": {
        "metadata": "The Credential type X509 indicates this cluster is secured using X509 Certificates. The thumbprint format is - d5 ec 42 56 b9 d5 31 24 25 42 64.",
        "ClusterCredentialType": "X509",
        "ServerCredentialType": "X509",
        "CertificateInformation": {
            "ClusterCertificate": {
                "Thumbprint": "AA4E00A783B246D53Axxxxx3203855F493AC6D7",
                "X509StoreName": "My"
            },
            "ServerCertificate": {
                "Thumbprint": "AA4E00A783B246D53A8xxxxxx3855F493AC6D7",
                "X509StoreName": "My"
            },
            "ClientCertificateThumbprints": [
                {
                    "CertificateThumbprint": "AA4E00A783B24xxxxx203855F493AC6D7",
                    "IsAdmin": false
                },
                {
                    "CertificateThumbprint": "AA4E00A783B246D5xxxxxx203855F493AC6D7",
                    "IsAdmin": true
                }
            ]
        }
    },

5) Now run the PowerShell cmdlet from this directory to create the cluster:

PS C:\WindowsServiceFabricCluster> .\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.X509.DevCluster.json -AcceptEULA

Creating Service Fabric Cluster...
If it's taking too long, please check in Task Manager details and see if Fabric.exe for each node is running. If not, please look at: 1. traces in DeploymentTraces directory and 2. traces in FabricLogRoot configured in ClusterConfig.json.
Trace folder already exists. Traces will be written to existing trace folder: C:\temp\Microsoft.Azure.ServiceFabric.WindowsServer\DeploymentTraces
Running Best Practices Analyzer...
Best Practices Analyzer completed successfully.
Creating Service Fabric Cluster...
Processing and validating cluster config.
Configuring nodes.
Default installation directory chosen based on system drive of machine 'localhost'.
Copying installer to all machines.
Configuring machine 'localhost'.
Machine localhost configured.
Running Fabric service installation.
Successfully started FabricInstallerSvc on machine localhost
Successfully started FabricHostSvc on machine localhost
Your cluster is successfully created! You can connect and manage your cluster using Microsoft Azure Service Fabric Explorer or Powershell. To connect through Powershell, run 'Connect-ServiceFabricCluster [ClusterConnectionEndpoint]'.

6) At this stage, we should see the cluster creation success message. We are done with creating the cluster and securing it.

7) Now on the client side/end-user machine, when we try to browse the secured cluster over IE, we should see a dialog prompt asking for a certificate. That means it is working as expected – so far so good.

8) Now install the client certificate on your client machine. For simplicity's sake, I am using the same machine as client and server. But the certificate has to be installed under Current User when accessing the cluster over IE.

Certmgr > Current User > Personal > Certificates.


9) Now we are ready to access. Browse the cluster URL, say https://localhost:19080/, and we should see a cert selection dialog displayed.


How to create a self-signed certificate (PFX):- (Optional)

—————————————————————–

1) Open the PS window > run this script as .\CertSetup.ps1 -Install.

The CertSetup.ps1 script is present inside the Service Fabric SDK folder, in the directory C:\Program Files\Microsoft SDKs\Service Fabric\ClusterSetup\Secure. You can edit this file if you do not want certain things in that PS.

2) Export .cer to PFX

$pswd = ConvertTo-SecureString -String "1234" -Force -AsPlainText

Get-ChildItem -Path Cert:\LocalMachine\My\<Thumbprint> | Export-PfxCertificate -FilePath C:\mypfx.pfx -Password $pswd

Precaution:-

  1. How to: Retrieve the Thumbprint of a Certificate
    https://msdn.microsoft.com/en-us/library/ms734695(v=vs.110).aspx
  2. Remove the invisible chars in the thumbprint (use Notepad++ > Encoding > Encode in ANSI to reveal the invisible chars) – don’t use Notepad. http://stackoverflow.com/questions/11115511/how-to-find-certificate-by-its-thumbprint-in-c-sharp
  3. A couple of PS scripts will help us remove the cluster or clean a previous installation: .\RemoveServiceFabricCluster.ps1 & .\CleanFabric.ps1
  4. Make sure to use the PFX and not the .cer. Just in case you run into some environment problem during the dev stage, it is better to reimage and retry.
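One way to scrub a thumbprint copied out of certmgr (precaution #2 above) is to delete every character that is not a hex digit, which also removes the invisible characters. A sketch with a hypothetical thumbprint value:

```shell
# Strip spaces and invisible characters from a thumbprint copied from certmgr.
# The value below is hypothetical; real thumbprints are 40 hex characters.
THUMB='aa 4e 00 a7 83 b2 46 d5 3a 88 43 3f 20 38 55 f4 93 ac 6d 70'
printf '%s' "$THUMB" | tr -cd 'A-Fa-f0-9' | tr 'a-f' 'A-F'
```

`tr -cd` deletes everything outside the hex alphabet (including zero-width Unicode junk), and the second `tr` uppercases the result.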

Hope this helps. Let me know if you see/need changes in this.

 

2017-01-27 Posted by | Azure, Microservices, ServiceFabric

Quick tip on Service Fabric Remoting service development

Azure Service Fabric needs no introduction. It is our next-gen PaaS offering, also called PaaS v2. It has been used internally for many years, tested, and released as an SDK for consumption. Some well-known offerings like Azure SQL, Azure DocumentDB and Skype run on Service Fabric. We already see the developer community consuming it for production and are hearing a lot of goodness.

It is free; anyone can download the SDK, develop and run it from their laptop or own data center, or publish to Azure. It works on Windows and Linux as well. It has a lot of rich features over the previous PaaS offering (Cloud Services), so we are seeing a lot of traction from big companies considering it for critical applications.

This sample is based on this example:-https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-communication-remoting/ 

Service-side project settings: set the platform target to x64 if you want to use the reliable collections or reliable actors APIs; failing to set this throws a binding exception as below.

System.BadImageFormatException was unhandled
  FileName=Microsoft.ServiceFabric.Services, Version=5.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35
  FusionLog=Assembly manager loaded from:  C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll
Running under executable  D:\Cases_Code\remotingclienttest\bin\Debug\remotingclienttest.vshost.exe
--- A detailed error log follows.

 


 


 

For the client side/calling method, I do not see setup-related information detailed here: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-reliable-services-communication-remoting/. I found these 3 DLLs have to be referenced in the client-side project for consuming the service. I simply copied them from the service-side sample's packages folder to the calling-side project folder.


sample code available – https://1drv.ms/u/s!ApBwDDnGdg5BhNd-KQHtWtaH-sbRcA

2016-11-13 Posted by | .NET, Azure Dev, C#, Microservices, PaaS, ServiceFabric, VS2015

What are Microservices for developers – Part 1

Today’s post comes after reading the below book on microservices plus various Service Fabric articles and research. I am planning to break it down into a series of blog posts explaining Service Fabric and things to learn for developers and architects in the coming days, hopefully – wish me good luck ;).. Comments welcome.

What are microservices, and why is there huge buzz about them in recent times:-

Microservices are nothing but fine-grained SOA offerings. We can say, more politely, “each one does one thing but does it very well” – without affecting others or waiting for something to trigger it. It gives developers a lot of freedom, most importantly to decide how to use, design, which language to choose, the platform to run on, how to deploy, scale, etc. Each service is decoupled from the main deliverable but orbits it in satellite-mode support with loose coupling.

Let us see one by one:-

1) It helps us develop and deliver smaller component services, importantly in an agile way. Each of these services can be owned by a smaller team, small enough to be fed with two pizzas – the “2-pizza team” (2PT) rule they use at Amazon – which I picked up from this fantastic book.

2) It gives us the freedom to develop and deploy services independently, without waiting for a complete release, say the quarterly or half-yearly release cycle as in the monolithic world. Consumers can start seeing enhancements or features rolled out every now and then, and enjoy them much sooner.

3) Though these microservices are deployed separately, they have to be orchestrated together in one way or another to work closely as a bigger application. Each is designed to run as a unique process or service in silos, bundled with a separate DB to maintain its state values. Services communicate with each other over web APIs/broker messages for signalling and syncing. By going this way, we make each team a small unit owning end-to-end service responsibility, focusing and pushing without waiting for a big release day.

4) DevOps-role folks come in here to develop, configure, release, maintain and also monitor these services, exposed as API endpoints – stateless REST APIs over HTTP/XML/JSON, or TCP, and of course even binary formats too. A typical example would be a heavily trafficked eCommerce site like Amazon: the recommendations shown at the bottom of the page would come from one such microservice, the payment gateway logic from another, or the themes or the catalogue... so who knows, there could be 25+ such microservices supporting the complete portal. This kind of service separation allows developers to evolve quickly and also ship their bug fixes/changes in near real time by updating one service after the other. A live example: amazon.com relies on more than 150 APIs to render any of its pages.

5) These services can be written in any programming language, using any framework, hosted on any platform. It gives us the choice to decide and roll out quickly with the languages we know.

6) It gives us the flexibility to test these individual services independently for scaling scenarios, to sustain sudden bursts like Black Friday sale days.

7) As I said earlier, these services do one thing, so they have to do it well. With this in mind, developers design them to live and serve for a long time, with newer versions running side by side. In case of an issue in a deployment, we can always redirect to the older or newer version in an agile way. Most of these steps are carried out through scripts or the portal GUI for config changes, to act quickly.

8) The next important benefit is “resilience”. A typical monolithic application would be deployed on one big fat server, whereas these microservices are spread across servers or even between different data centers. If one such service goes down, requests get served from another available copy, so business continues as usual. It also allows us to “scale out” efficiently, with duplicate copies of the critical core services deployed across servers/VMs/containers. There are open source tools which come in handy to perform these scaling operations. In a monolithic design, we can at most scale up to support the additional load, but the parts cannot live in separation – they are tightly coupled in the same basket.

9) Companies known as early adopters of microservices include Amazon, Netflix, Twitter, Bluemix, SoundCloud, Google and Azure. In recent times, many big websites and applications have already moved from a monolithic to a microservices architecture to reap its benefits. Recently this area has been hotter than the Big Data or IoT trends, and players like Google Kubernetes, Mesos/Marathon, Docker Swarm, Amazon ECS, CoreOS Fleet, Rancher, Azure Container Service, Service Fabric etc. are there to fill the need in one form or another. The main objective of Service Fabric is to reduce the complexity of building complex applications using the microservice approach. Containers and actor-based programming models are examples, but as said earlier, this area is going to see more updates this year.

10) Microservices are usually designed to be stateless, because stateful services are tricky and difficult to design given the state they must store and maintain. But Service Fabric allows us to build such stateless/stateful services without much effort. Service Fabric is a PaaS system for building highly scalable, distributed applications composed from microservices. It enables an application runtime, patching, innovative rolling upgrades and health-model support, all while maintaining local state replication. I will try to cover the rest in the next part, in detail and with hands-on examples. Here is a bunch of links to think through.

Microservices:

  1. Why a microservices approach to building applications? https://azure.microsoft.com/en-us/documentation/articles/service-fabric-overview-microservices/
  2. Virtual Academy: Exploring Microservice Architecture : https://azure.microsoft.com/en-us/blog/virtual-academy-exploring-microservice-architecture/
  3. Provision and deploy microservices predictably in Azure : https://azure.microsoft.com/en-us/documentation/articles/app-service-deploy-complex-application-predictably/
  4. http://www.microservicesweekly.com/ – * highly recommended *

General resources:

  1. Microservices : http://martinfowler.com/articles/microservices.html
  2. Microservices Are Not a free lunch! : http://contino.co.uk/microservices-not-a-free-lunch/
  3. Services, Microservices, Nanoservices – oh my! : http://arnon.me/2014/03/services-microservices-nanoservices/
  4. Micro Service Architecture : https://yobriefca.se/blog/2013/04/29/micro-service-architecture/
  5. Microservice Design Patterns : https://www.voxxed.com/blog/2015/04/microservice-design-patterns/
  6. Microservices: Decomposing Applications for Deployability and Scalability : http://www.infoq.com/articles/microservices-intro

Service Fabric –specific:

  1. Microsoft Azure Service Fabric Architecture : https://channel9.msdn.com/Events/Ignite/2015/BRK3717
  2. Building Resilient, Scalable Services with Microsoft Azure Service Fabric : https://channel9.msdn.com/Events/Ignite/2015/BRK3730
  3. Deploying and Managing Services with Microsoft Azure Service Fabric : https://channel9.msdn.com/Events/Ignite/2015/BRK3478
  4. Service Orchestration with Microsoft Azure Service Fabric : https://channel9.msdn.com/Events/Ignite/2015/BRK3485
  5. Microsoft Azure Service Fabric Actors: The Director’s Cut : https://channel9.msdn.com/Events/Ignite/2015/BRK3476
  6. Microservices: An application revolution powered by the cloud:- https://azure.microsoft.com/en-us/blog/microservices-an-application-revolution-powered-by-the-cloud/
  7. Guide to converting Web and Worker Roles to Service Fabric stateless services – https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cloud-services-migration-worker-role-stateless-service
    Learn about the differences between Cloud Services and Service Fabric before migrating applications. –https://azure.microsoft.com/en-us/documentation/articles/service-fabric-cloud-services-migration-differences/

btw, as I am writing this, I see Service Fabric going live, or GA (general availability). Worth checking the //BUILD 2016 events and news about the same topic for the latest happenings: https://channel9.msdn.com/Events/Build/2016

Happy learning!

2016-04-01 Posted by | Azure, Azure Dev, DevOps, Microservices, PaaS, ServiceFabric

   
