Algorithms: Array Parity

 

You are given an array of integers (with a length of at least 3, but possibly very large). The array is composed entirely of odd integers or entirely of even integers, except for a single integer N. Write a method that takes the array as an argument and returns this “outlier” N.

[2, 4, 0, 100, 4, 11, 2602, 36]
Should return: 11 (the only odd number)

[160, 3, 1719, 19, 11, 13, -21]
Should return: 160 (the only even number)

GO

package main

import (
	"fmt"
)

func find_outlier(integers []int32) int32 {
	// n&1 yields the parity bit (0 for even, 1 for odd) and, unlike %,
	// is also correct for negative values in Go (-21 % 2 == -1, but -21&1 == 1).
	// At least two of the first three elements share the majority parity.
	test_result := integers[0]&1 + integers[1]&1 + integers[2]&1
	var expected_result int32 = 1
	if test_result <= 1 {
		expected_result = 0
	}
	for _, value := range integers {
		if value&1 != expected_result {
			return value
		}
	}
	return -1 // unreachable for valid input
}
func main() {
	fmt.Println(find_outlier([]int32{4,6,7,10}))
	fmt.Println(find_outlier([]int32{160, 3, 1719, 19, 11, 13, -21}))
}

PYTHON

def find_outlier(integers):
    # At least two of the first three elements share the majority parity.
    test_bed = integers[0] % 2 + integers[1] % 2 + integers[2] % 2
    if test_bed <= 1:
        base_result = 0  # series is mostly even
    else:
        base_result = 1  # series is mostly odd
    for i in integers:
        if i % 2 != base_result:
            return i
    return None
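For comparison, a more compact variant partitions by parity first. This is just a sketch; the name find_outlier_compact is mine. It relies on Python's % always returning a non-negative result for a positive modulus, so negatives need no special handling:

```python
def find_outlier_compact(integers):
    # In Python, n % 2 is 0 or 1 even for negative n.
    odds = [n for n in integers if n % 2 == 1]
    evens = [n for n in integers if n % 2 == 0]
    # Exactly one of the two lists holds the single outlier.
    return odds[0] if len(odds) == 1 else evens[0]
```

For example, find_outlier_compact([2, 4, 0, 100, 4, 11, 2602, 36]) returns 11. The trade-off is two full passes over the array instead of the early exit above.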

RUST

#![allow(unused)]
fn find_outlier(integers: &[i32]) -> i32 {
    // rem_euclid keeps the parity non-negative for negative inputs,
    // unlike %, which returns -1 for negative odd numbers.
    // At least two of the first three elements share the majority parity.
    let test_bed: i32 = integers[0].rem_euclid(2)
        + integers[1].rem_euclid(2)
        + integers[2].rem_euclid(2);
    let expected_result = if test_bed <= 1 { 0 } else { 1 };
    for i in integers.iter() {
        if i.rem_euclid(2) != expected_result {
            return *i;
        }
    }
    -1 // unreachable for valid input
}
fn main() {
  assert_eq!(find_outlier(&[160, 3, 1719, 19, 11, 13, -21]),160);
}

Algorithm Interview Questions: Square

 

Given an integral number, determine if it’s a square number:
In mathematics, a square number or perfect square is an integer that is the square of an integer; in other words, it is the product of some integer with itself.

Output:

isSquare(-1) returns  false
isSquare(0) returns   true
isSquare(3) returns   false
isSquare(4) returns   true
isSquare(25) returns  true  
isSquare(26) returns  false

Python

import math

def is_square(n):
    if n < 0:
        return False
    if n == 0:
        return True
    upper_index = n
    while upper_index >= 1:
        # Halve the bound until its square drops to n or below.
        upper_index = math.ceil(upper_index / 2)
        if upper_index ** 2 == n:
            return True
        elif upper_index ** 2 > n:
            continue
        else:
            # n lies strictly between upper_index**2 and (2*upper_index)**2,
            # so scan that gap for an exact square.
            for index in range(upper_index + 1, upper_index * 2 + 1):
                if index ** 2 == n:
                    return True
            return False
    return False
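For reference, a much shorter check is possible with math.isqrt (available since Python 3.8). This is an alternative sketch rather than the interview solution above; the name is_square_isqrt is mine:

```python
import math

def is_square_isqrt(n):
    # Negative numbers are never perfect squares.
    if n < 0:
        return False
    root = math.isqrt(n)  # exact integer square root; no float rounding issues
    return root * root == n
```

Unlike a float-based math.sqrt comparison, isqrt stays exact even for very large integers.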

Go

package main

import (
	"fmt"
	"math"
)
func is_square(num int32) bool {
	n := float64(num)
	bound := n
	for bound >= 0 {
		// Halve the bound until its square drops to n or below.
		bound = math.Ceil(bound / 2)
		if math.Pow(bound, 2.0) == n {
			return true
		} else if math.Pow(bound, 2.0) > n {
			continue
		} else {
			// n lies strictly between bound^2 and (2*bound)^2: scan the gap.
			for i := bound + 1; i < bound*2; i += 1 {
				if math.Pow(i, 2.0) == n {
					return true
				}
			}
			return false
		}
	}
	return false
}
func main() {
	fmt.Println(is_square(10))
	fmt.Println(is_square(25))
	fmt.Println(is_square(0))
}

Rust

#![allow(unused)]
fn is_square(n: i32) -> bool {
    let mut bound = n;
    while bound >= 0 {
        // Halve the bound until its square drops to n or below.
        bound = (bound as f64 / 2.0).ceil() as i32;
        if bound.pow(2) == n {
            return true;
        } else if bound.pow(2) > n {
            continue;
        } else {
            // n lies strictly between bound^2 and (2*bound)^2: scan the gap.
            for i in (bound + 1)..(bound * 2) {
                if i.pow(2) == n {
                    return true;
                }
            }
            return false;
        }
    }
    false
}
fn main() {
  assert_eq!(is_square(10),false);
  assert_eq!(is_square(25),true);
  assert_eq!(is_square(0),true);
}

Python: Break and Continue

The break statement, like in C, breaks out of the innermost enclosing for or while loop. Loop statements may have an else clause; it is executed when the loop terminates through exhaustion of the list (with for) or when the condition becomes false (with while), but not when the loop is terminated by a break statement. The loop below shows the else clause in action:

for i in range(3, 10, 2):
    if i == 7:
        print("I encountered {}".format(i))
        break
else:  # runs only if the for loop completed without a break
    print("The loop ran to completion")
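The prime-number search that the Python tutorial itself uses to illustrate the same else clause looks like this:

```python
for n in range(2, 10):
    for x in range(2, n):
        if n % x == 0:
            print(n, 'equals', x, '*', n // x)
            break
    else:
        # The inner loop found no divisor, so n is prime.
        print(n, 'is a prime number')
```

Here the else belongs to the inner for loop, not to an if: it fires only for the values of n where no divisor caused a break.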

One alternative to break is to wrap the loop in a function and use return; this is quite useful when you need to exit several loop levels at once.

Detecting A Break

You can combine a loop's else clause with the continue keyword to detect whether a break occurred in an inner loop. An example is the snippet below:

for x in range(5):
    for y in range(7):
        if y == 3:
            break  # break out of the inner loop
    else:
        # The else clause runs only when the inner loop finished without a
        # break; continue then skips the rest of the outer-loop body.
        continue
    # Anything here executes only if we broke out of the inner loop
    print("Yes, there was a break")
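If the for/else idiom feels too obscure, a plain boolean flag does the same job more explicitly (the variable name broke_out is mine):

```python
for x in range(5):
    broke_out = False
    for y in range(7):
        if y == 3:
            broke_out = True
            break  # break out of the inner loop
    if broke_out:
        # Runs only when the inner loop was cut short.
        print("Yes, there was a break")
```

The flag version costs an extra variable per outer iteration, but many readers find it easier to follow than else-on-a-loop.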

Multi-level break

There are certain situations that require breaking out of an outer loop from an inner loop, and this Python does not support natively. You have a couple of options, as discussed in this Stack Overflow thread, ranging from refactoring into functions and using ‘return‘ all the way to extremes like raising an exception.
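The refactor-to-a-function option can be sketched as follows; find_pair and its arguments are illustrative, not taken from the thread:

```python
def find_pair(matrix, target):
    """Return the (row, col) of target in a list of lists, or None if absent."""
    for i, row in enumerate(matrix):
        for j, value in enumerate(row):
            if value == target:
                # return unwinds every enclosing loop at once.
                return (i, j)
    return None
```

For example, find_pair([[1, 2], [3, 4]], 4) returns (1, 1), exiting both loops in a single step.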


How to set up kops on AWS

Kops provides a very simple way to set up your Kubernetes environment on AWS, going as far as taking over the responsibility of configuring things like DNS. In this example, I am setting up a Kubernetes cluster for my learning environment. Kops has to store the state of your cluster somewhere; I will be using an S3 bucket for storage in this example. You simply export the name of your cluster and the name of your S3 bucket as environment variables.

$ export NAME=myfirstcluster.k8s.local
$ export KOPS_STATE_STORE=s3://tunde-kops-store
$ kops create cluster --name myfirstcluster.k8s.local --node-count 3 --node-size t2.micro --master-size t2.micro --zones us-west-2a,us-west-2b

The above command will create a cluster with 3 nodes of type t2.micro in AZs us-west-2a and us-west-2b using gossip DNS (no Route 53). The DNS choice was deduced from the name, since it ends in k8s.local. You should get output along the lines of:

I0926 03:16:55.582697   24396 create_cluster.go:655] Inferred --cloud=aws from zone "us-west-2a"
I0926 03:16:55.584585   24396 create_cluster.go:841] Using SSH public key: /Users/tunde.oladipupo/.ssh/id_rsa.pub
I0926 03:16:57.776544   24396 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet us-west-2a
I0926 03:16:57.776605   24396 subnets.go:183] Assigned CIDR 172.20.64.0/19 to subnet us-west-2b
Previewing changes that will be made:

I0926 03:17:01.404085   24396 apply_cluster.go:396] Gossip DNS: skipping DNS validation
I0926 03:17:01.440953   24396 executor.go:91] Tasks: 0 done / 69 total; 32 can run
I0926 03:17:02.027609   24396 executor.go:91] Tasks: 32 done / 69 total; 14 can run
I0926 03:17:02.379637   24396 executor.go:91] Tasks: 46 done / 69 total; 19 can run
I0926 03:17:03.052653   24396 executor.go:91] Tasks: 65 done / 69 total; 3 can run
W0926 03:17:03.267565   24396 keypair.go:113] Task did not have an address: *awstasks.LoadBalancer {"Name":"api.myfirstcluster.k8s.local","LoadBalancerName":"api-myfirstcluster-k8s-lo-hqulii","DNSName":null,"HostedZoneId":null,"Subnets":[{"Name":"us-west-2a.myfirstcluster.k8s.local","ID":null,"VPC":{"Name":"myfirstcluster.k8s.local","ID":null,"CIDR":"172.20.0.0/16","EnableDNSHostnames":true,"EnableDNSSupport":true,"Shared":false,"Tags":{"KubernetesCluster":"myfirstcluster.k8s.local","Name":"myfirstcluster.k8s.local","kubernetes.io/cluster/myfirstcluster.k8s.local":"owned"}},"AvailabilityZone":"us-west-2a","CIDR":"172.20.32.0/19","Shared":false,"Tags":{"KubernetesCluster":"myfirstcluster.k8s.local","Name":"us-west-2a.myfirstcluster.k8s.local","kubernetes.io/cluster/myfirstcluster.k8s.local":"owned"}},{"Name":"us-west-2b.myfirstcluster.k8s.local","ID":null,"VPC":{"Name":"myfirstcluster.k8s.local","ID":null,"CIDR":"172.20.0.0/16","EnableDNSHostnames":true,"EnableDNSSupport":true,"Shared":false,"Tags":{"KubernetesCluster":"myfirstcluster.k8s.local","Name":"myfirstcluster.k8s.local","kubernetes.io/cluster/myfirstcluster.k8s.local":"owned"}},"AvailabilityZone":"us-west-2b","CIDR":"172.20.64.0/19","Shared":false,"Tags":{"KubernetesCluster":"myfirstcluster.k8s.local","Name":"us-west-2b.myfirstcluster.k8s.local","kubernetes.io/cluster/myfirstcluster.k8s.local":"owned"}}],"SecurityGroups":[{"Name":"api-elb.myfirstcluster.k8s.local","ID":null,"Description":"Security group for api 
ELB","VPC":{"Name":"myfirstcluster.k8s.local","ID":null,"CIDR":"172.20.0.0/16","EnableDNSHostnames":true,"EnableDNSSupport":true,"Shared":false,"Tags":{"KubernetesCluster":"myfirstcluster.k8s.local","Name":"myfirstcluster.k8s.local","kubernetes.io/cluster/myfirstcluster.k8s.local":"owned"}},"RemoveExtraRules":["port=443"],"Shared":null}],"Listeners":{"443":{"InstancePort":443}},"Scheme":null,"HealthCheck":{"Target":"TCP:443","HealthyThreshold":2,"UnhealthyThreshold":2,"Interval":10,"Timeout":5},"AccessLog":null,"ConnectionDraining":null,"ConnectionSettings":{"IdleTimeout":300},"CrossZoneLoadBalancing":null}
I0926 03:17:03.512667   24396 executor.go:91] Tasks: 68 done / 69 total; 1 can run
I0926 03:17:03.609328   24396 executor.go:91] Tasks: 69 done / 69 total; 0 can run
Will create resources:
  AutoscalingGroup/master-us-west-2a.masters.myfirstcluster.k8s.local
  	MinSize             	1
  	MaxSize             	1
  	Subnets             	[name:us-west-2a.myfirstcluster.k8s.local]
  	Tags                	{k8s.io/role/master: 1, Name: master-us-west-2a.masters.myfirstcluster.k8s.local, KubernetesCluster: myfirstcluster.k8s.local}
  	LaunchConfiguration 	name:master-us-west-2a.masters.myfirstcluster.k8s.local

  AutoscalingGroup/nodes.myfirstcluster.k8s.local
  	MinSize             	3
  	MaxSize             	3
  	Subnets             	[name:us-west-2a.myfirstcluster.k8s.local, name:us-west-2b.myfirstcluster.k8s.local]
  	Tags                	{KubernetesCluster: myfirstcluster.k8s.local, k8s.io/role/node: 1, Name: nodes.myfirstcluster.k8s.local}
  	LaunchConfiguration 	name:nodes.myfirstcluster.k8s.local

  DHCPOptions/myfirstcluster.k8s.local
  	DomainName          	us-west-2.compute.internal
  	DomainNameServers   	AmazonProvidedDNS

  EBSVolume/a.etcd-events.myfirstcluster.k8s.local
  	AvailabilityZone    	us-west-2a
  	VolumeType          	gp2
  	SizeGB              	20
  	Encrypted           	false
  	Tags                	{k8s.io/etcd/events: a/a, k8s.io/role/master: 1, Name: a.etcd-events.myfirstcluster.k8s.local, KubernetesCluster: myfirstcluster.k8s.local}

  EBSVolume/a.etcd-main.myfirstcluster.k8s.local
  	AvailabilityZone    	us-west-2a
  	VolumeType          	gp2
  	SizeGB              	20
  	Encrypted           	false
  	Tags                	{k8s.io/etcd/main: a/a, k8s.io/role/master: 1, Name: a.etcd-main.myfirstcluster.k8s.local, KubernetesCluster: myfirstcluster.k8s.local}

  IAMInstanceProfile/masters.myfirstcluster.k8s.local

  IAMInstanceProfile/nodes.myfirstcluster.k8s.local

  IAMInstanceProfileRole/masters.myfirstcluster.k8s.local
  	InstanceProfile     	name:masters.myfirstcluster.k8s.local id:masters.myfirstcluster.k8s.local
  	Role                	name:masters.myfirstcluster.k8s.local

  IAMInstanceProfileRole/nodes.myfirstcluster.k8s.local
  	InstanceProfile     	name:nodes.myfirstcluster.k8s.local id:nodes.myfirstcluster.k8s.local
  	Role                	name:nodes.myfirstcluster.k8s.local

  IAMRole/masters.myfirstcluster.k8s.local
  	ExportWithID        	masters

  IAMRole/nodes.myfirstcluster.k8s.local
  	ExportWithID        	nodes

  IAMRolePolicy/masters.myfirstcluster.k8s.local
  	Role                	name:masters.myfirstcluster.k8s.local

  IAMRolePolicy/nodes.myfirstcluster.k8s.local
  	Role                	name:nodes.myfirstcluster.k8s.local

  InternetGateway/myfirstcluster.k8s.local
  	VPC                 	name:myfirstcluster.k8s.local
  	Shared              	false

  Keypair/kops
  	Subject             	o=system:masters,cn=kops
  	Type                	client

  Keypair/kube-controller-manager
  	Subject             	cn=system:kube-controller-manager
  	Type                	client

  Keypair/kube-proxy
  	Subject             	cn=system:kube-proxy
  	Type                	client

  Keypair/kube-scheduler
  	Subject             	cn=system:kube-scheduler
  	Type                	client

  Keypair/kubecfg
  	Subject             	o=system:masters,cn=kubecfg
  	Type                	client

  Keypair/kubelet
  	Subject             	o=system:nodes,cn=kubelet
  	Type                	client

  Keypair/master
  	Subject             	cn=kubernetes-master
  	Type                	server
  	AlternateNames      	[100.64.0.1, 127.0.0.1, api.internal.myfirstcluster.k8s.local, api.myfirstcluster.k8s.local, kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster.local]

  LaunchConfiguration/master-us-west-2a.masters.myfirstcluster.k8s.local
  	ImageID             	kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-07-28
  	InstanceType        	t2.micro
  	SSHKey              	name:kubernetes.myfirstcluster.k8s.local-7a:d5:3c:74:6e:48:e3:cf:89:cb:f3:60:8e:30:b4:4e id:kubernetes.myfirstcluster.k8s.local-7a:d5:3c:74:6e:48:e3:cf:89:cb:f3:60:8e:30:b4:4e
  	SecurityGroups      	[name:masters.myfirstcluster.k8s.local]
  	AssociatePublicIP   	true
  	IAMInstanceProfile  	name:masters.myfirstcluster.k8s.local id:masters.myfirstcluster.k8s.local
  	RootVolumeSize      	20
  	RootVolumeType      	gp2
  	SpotPrice           	

  LaunchConfiguration/nodes.myfirstcluster.k8s.local
  	ImageID             	kope.io/k8s-1.6-debian-jessie-amd64-hvm-ebs-2017-07-28
  	InstanceType        	t2.micro
  	SSHKey              	name:kubernetes.myfirstcluster.k8s.local-7a:d5:3c:74:6e:48:e3:cf:89:cb:f3:60:8e:30:b4:4e id:kubernetes.myfirstcluster.k8s.local-7a:d5:3c:74:6e:48:e3:cf:89:cb:f3:60:8e:30:b4:4e
  	SecurityGroups      	[name:nodes.myfirstcluster.k8s.local]
  	AssociatePublicIP   	true
  	IAMInstanceProfile  	name:nodes.myfirstcluster.k8s.local id:nodes.myfirstcluster.k8s.local
  	RootVolumeSize      	20
  	RootVolumeType      	gp2
  	SpotPrice           	

  LoadBalancer/api.myfirstcluster.k8s.local
  	LoadBalancerName    	api-myfirstcluster-k8s-lo-hqulii
  	Subnets             	[name:us-west-2a.myfirstcluster.k8s.local, name:us-west-2b.myfirstcluster.k8s.local]
  	SecurityGroups      	[name:api-elb.myfirstcluster.k8s.local]
  	Listeners           	{443: {"InstancePort":443}}
  	HealthCheck         	{"Target":"TCP:443","HealthyThreshold":2,"UnhealthyThreshold":2,"Interval":10,"Timeout":5}
  	ConnectionSettings  	{"IdleTimeout":300}

  LoadBalancerAttachment/api-master-us-west-2a
  	LoadBalancer        	name:api.myfirstcluster.k8s.local id:api.myfirstcluster.k8s.local
  	AutoscalingGroup    	name:master-us-west-2a.masters.myfirstcluster.k8s.local id:master-us-west-2a.masters.myfirstcluster.k8s.local

  ManagedFile/myfirstcluster.k8s.local-addons-bootstrap
  	Location            	addons/bootstrap-channel.yaml

  ManagedFile/myfirstcluster.k8s.local-addons-core.addons.k8s.io
  	Location            	addons/core.addons.k8s.io/v1.4.0.yaml

  ManagedFile/myfirstcluster.k8s.local-addons-dns-controller.addons.k8s.io-k8s-1.6
  	Location            	addons/dns-controller.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/myfirstcluster.k8s.local-addons-dns-controller.addons.k8s.io-pre-k8s-1.6
  	Location            	addons/dns-controller.addons.k8s.io/pre-k8s-1.6.yaml

  ManagedFile/myfirstcluster.k8s.local-addons-kube-dns.addons.k8s.io-k8s-1.6
  	Location            	addons/kube-dns.addons.k8s.io/k8s-1.6.yaml

  ManagedFile/myfirstcluster.k8s.local-addons-kube-dns.addons.k8s.io-pre-k8s-1.6
  	Location            	addons/kube-dns.addons.k8s.io/pre-k8s-1.6.yaml

  ManagedFile/myfirstcluster.k8s.local-addons-limit-range.addons.k8s.io
  	Location            	addons/limit-range.addons.k8s.io/v1.5.0.yaml

  ManagedFile/myfirstcluster.k8s.local-addons-storage-aws.addons.k8s.io
  	Location            	addons/storage-aws.addons.k8s.io/v1.6.0.yaml

  Route/0.0.0.0/0
  	RouteTable          	name:myfirstcluster.k8s.local
  	CIDR                	0.0.0.0/0
  	InternetGateway     	name:myfirstcluster.k8s.local

  RouteTable/myfirstcluster.k8s.local
  	VPC                 	name:myfirstcluster.k8s.local

  RouteTableAssociation/us-west-2a.myfirstcluster.k8s.local
  	RouteTable          	name:myfirstcluster.k8s.local
  	Subnet              	name:us-west-2a.myfirstcluster.k8s.local

  RouteTableAssociation/us-west-2b.myfirstcluster.k8s.local
  	RouteTable          	name:myfirstcluster.k8s.local
  	Subnet              	name:us-west-2b.myfirstcluster.k8s.local

  SSHKey/kubernetes.myfirstcluster.k8s.local-7a:d5:3c:74:6e:48:e3:cf:89:cb:f3:60:8e:30:b4:4e
  	KeyFingerprint      	f7:8a:21:e9:95:4e:43:93:fb:08:11:40:9c:89:8e:06

  Secret/admin

  Secret/kube

  Secret/kube-proxy

  Secret/kubelet

  Secret/system-controller_manager

  Secret/system-dns

  Secret/system-logging

  Secret/system-monitoring

  Secret/system-scheduler

  SecurityGroup/api-elb.myfirstcluster.k8s.local
  	Description         	Security group for api ELB
  	VPC                 	name:myfirstcluster.k8s.local
  	RemoveExtraRules    	[port=443]

  SecurityGroup/masters.myfirstcluster.k8s.local
  	Description         	Security group for masters
  	VPC                 	name:myfirstcluster.k8s.local
  	RemoveExtraRules    	[port=22, port=443, port=4001, port=4789, port=179]

  SecurityGroup/nodes.myfirstcluster.k8s.local
  	Description         	Security group for nodes
  	VPC                 	name:myfirstcluster.k8s.local
  	RemoveExtraRules    	[port=22]

  SecurityGroupRule/all-master-to-master
  	SecurityGroup       	name:masters.myfirstcluster.k8s.local
  	SourceGroup         	name:masters.myfirstcluster.k8s.local

  SecurityGroupRule/all-master-to-node
  	SecurityGroup       	name:nodes.myfirstcluster.k8s.local
  	SourceGroup         	name:masters.myfirstcluster.k8s.local

  SecurityGroupRule/all-node-to-node
  	SecurityGroup       	name:nodes.myfirstcluster.k8s.local
  	SourceGroup         	name:nodes.myfirstcluster.k8s.local

  SecurityGroupRule/api-elb-egress
  	SecurityGroup       	name:api-elb.myfirstcluster.k8s.local
  	CIDR                	0.0.0.0/0
  	Egress              	true

  SecurityGroupRule/https-api-elb-0.0.0.0/0
  	SecurityGroup       	name:api-elb.myfirstcluster.k8s.local
  	CIDR                	0.0.0.0/0
  	Protocol            	tcp
  	FromPort            	443
  	ToPort              	443

  SecurityGroupRule/https-elb-to-master
  	SecurityGroup       	name:masters.myfirstcluster.k8s.local
  	Protocol            	tcp
  	FromPort            	443
  	ToPort              	443
  	SourceGroup         	name:api-elb.myfirstcluster.k8s.local

  SecurityGroupRule/master-egress
  	SecurityGroup       	name:masters.myfirstcluster.k8s.local
  	CIDR                	0.0.0.0/0
  	Egress              	true

  SecurityGroupRule/node-egress
  	SecurityGroup       	name:nodes.myfirstcluster.k8s.local
  	CIDR                	0.0.0.0/0
  	Egress              	true

  SecurityGroupRule/node-to-master-tcp-1-4000
  	SecurityGroup       	name:masters.myfirstcluster.k8s.local
  	Protocol            	tcp
  	FromPort            	1
  	ToPort              	4000
  	SourceGroup         	name:nodes.myfirstcluster.k8s.local

  SecurityGroupRule/node-to-master-tcp-4003-65535
  	SecurityGroup       	name:masters.myfirstcluster.k8s.local
  	Protocol            	tcp
  	FromPort            	4003
  	ToPort              	65535
  	SourceGroup         	name:nodes.myfirstcluster.k8s.local

  SecurityGroupRule/node-to-master-udp-1-65535
  	SecurityGroup       	name:masters.myfirstcluster.k8s.local
  	Protocol            	udp
  	FromPort            	1
  	ToPort              	65535
  	SourceGroup         	name:nodes.myfirstcluster.k8s.local

  SecurityGroupRule/ssh-external-to-master-0.0.0.0/0
  	SecurityGroup       	name:masters.myfirstcluster.k8s.local
  	CIDR                	0.0.0.0/0
  	Protocol            	tcp
  	FromPort            	22
  	ToPort              	22

  SecurityGroupRule/ssh-external-to-node-0.0.0.0/0
  	SecurityGroup       	name:nodes.myfirstcluster.k8s.local
  	CIDR                	0.0.0.0/0
  	Protocol            	tcp
  	FromPort            	22
  	ToPort              	22

  Subnet/us-west-2a.myfirstcluster.k8s.local
  	VPC                 	name:myfirstcluster.k8s.local
  	AvailabilityZone    	us-west-2a
  	CIDR                	172.20.32.0/19
  	Shared              	false
  	Tags                	{KubernetesCluster: myfirstcluster.k8s.local, Name: us-west-2a.myfirstcluster.k8s.local, kubernetes.io/cluster/myfirstcluster.k8s.local: owned}

  Subnet/us-west-2b.myfirstcluster.k8s.local
  	VPC                 	name:myfirstcluster.k8s.local
  	AvailabilityZone    	us-west-2b
  	CIDR                	172.20.64.0/19
  	Shared              	false
  	Tags                	{KubernetesCluster: myfirstcluster.k8s.local, Name: us-west-2b.myfirstcluster.k8s.local, kubernetes.io/cluster/myfirstcluster.k8s.local: owned}

  VPC/myfirstcluster.k8s.local
  	CIDR                	172.20.0.0/16
  	EnableDNSHostnames  	true
  	EnableDNSSupport    	true
  	Shared              	false
  	Tags                	{Name: myfirstcluster.k8s.local, kubernetes.io/cluster/myfirstcluster.k8s.local: owned, KubernetesCluster: myfirstcluster.k8s.local}

  VPCDHCPOptionsAssociation/myfirstcluster.k8s.local
  	VPC                 	name:myfirstcluster.k8s.local
  	DHCPOptions         	name:myfirstcluster.k8s.local

Must specify --yes to apply changes

Cluster configuration has been created.

Suggestions:
 * list clusters with: kops get cluster
 * edit this cluster with: kops edit cluster myfirstcluster.k8s.local
 * edit your node instance group: kops edit ig --name=myfirstcluster.k8s.local nodes
 * edit your master instance group: kops edit ig --name=myfirstcluster.k8s.local master-us-west-2a

Finally configure your cluster with: kops update cluster myfirstcluster.k8s.local --yes

Once the changes have been applied with kops update cluster myfirstcluster.k8s.local --yes, we can take a look at our cluster using

$ kops get cluster myfirstcluster.k8s.local
NAME				CLOUD	ZONES
myfirstcluster.k8s.local	aws	us-west-2a,us-west-2b

If we need to make changes to our cluster, we simply run

$ kops edit cluster myfirstcluster.k8s.local

and get something along the lines of

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2017-09-26T08:16:57Z
  name: myfirstcluster.k8s.local
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    alwaysAllow: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://tunde-kops-store/myfirstcluster.k8s.local
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-west-2a
      name: a
    name: main
  - etcdMembers:
    - instanceGroup: master-us-west-2a
      name: a
    name: events
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.6.7
  masterInternalName: api.internal.myfirstcluster.k8s.local
  masterPublicName: api.myfirstcluster.k8s.local
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: us-west-2a
    type: Public
    zone: us-west-2a
  - cidr: 172.20.64.0/19
    name: us-west-2b
    type: Public
    zone: us-west-2b
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

After the changes have been made, finalize and apply them with

$ kops update cluster myfirstcluster.k8s.local --yes