Tuesday, August 11, 2020

"Same dog, different collar...or hog" they said

                                                             sculptures by Maria Rubinke


A long time has passed since the last post, and many promises I made have been broken. Today I made a shift in my career, but this blog post is not meant to blame anyone or anything, just to share my experience, so here we go:


I left PSBU, a.k.a. delivery, or as Ray Budavary once called it, the "honest work." It was kind of a weird ride for me, since I never had the luck to be named after something fancy like NSX, though I took part in every crazy delivery in my region. Then I joined the MAPS BU as an SE Specialist, or Tanzu SE Specialist, and I still can't believe it: I was close to moving to Google. Then I took the red pill (or was it the blue one?). Anyway, I'm here, still alive and with an everlasting hunger to learn new things, but don't get confused; I went through a demanding process to get here. Nothing has been a free pass for me, and probably nothing will be. So, Kubernetes (or, as a guy in a bar once coined it, "the k8s pussy galore"; just stay calm): I realized that my journey from NSX to PKS drew my attention to how networking happens in something like a platform built to run apps as containers.


That said, I started to speed up my self-learning back in 2019, when I found motivation in things as awesome as eBPF (not just for observability), and even earlier with PKS, now TKGI. Again, eBPF was not an easy beast to understand, so I had to go back to my origins: the Linux kernel and systems fundamentals had to be recalled in my mind, then Docker with a couple of new resources, finally landing in Kubernetes. I was able to create some content for the VLZ VMUG Advantage tying together NSX and CNA (MAPS now), so it was a cool ride, but also, being so stubborn, I said to myself that I wanted to surf that wave. I'm starting now, but swimming with sharks; there is no such thing as a free lunch.


So, to those who believe in me, thank you so much. To those who don't, I also send you my blessings, because these 10-ish years of experience, down to the last two days of being in PSO, showed me what I'm made of. Thanks for that, because without it the VCDX design would have been just a senseless document. I'm not a VCDX (I'm in the process of writing a farewell letter to virtual infrastructure :)), so please don't report my blog for being a liar; what I'm saying is that it is a very costly time effort, especially with a complicated global multi-country solution delivery ongoing. Also, the lessons learned, and the people I was fortunate to mentor and help shine: oh boy, that was for me the most valuable and precious part of this end of the journey.

I hope I can share more technical stuff in the near future and bother you with my thoughts.



cyas!

Tuesday, September 24, 2019

NSX-T the revolution continues : VCAP NV (NSX-T) Resources



Hello there,

During the last couple of months, between my engagements and little battles, I took "a break" to take part in the creation of the NSX VCAP Design exam based on NSX-T. Being there, I realized I could do something not just for the sake of posting everywhere I could, but to try to make things better for others. Don't get me wrong here: I just did what I thought was best to improve the exam. So, for example (and please take notes if you are preparing for the exam): this is not a "step 1, then 2, then 3" exam. That is fine for admins, and you should know it for sure, but what about designing per se? Here you need to THINK and take care of the details; think of the forest, as somebody says, and not just the tree. That will help you pick the best option and get the score. So besides reading the Exam Guide, which I strongly recommend, take a look at the design guides out there among the design references and ask yourself WHY at every decision point. In my case, as an old man, I have to read a lot to push myself into understanding this every time, but I put my effort into making it simple. A good starting point is Amit Aneja's reference designs; the special sauce here is to understand the N-VDS very well.


So, this is the tip of the iceberg and could sound like either a cliché or something abstract, but if you need some mentorship, ping me and I will be happy to help you tackle this goal.

PS. I'm not going to share deep secrets about the answers; I'm going to share the way to pass it, not just hand you the fish already cooked.



cheers

Tuesday, September 18, 2018

VMware so FaaS with Serverless Dispatch Framework

In my years as a VMware consultant I have faced new things in technology many times, sometimes to address a problem, other times just because I like it. The combination of both brought me to the serverless world, and I made an extra effort to understand almost everything about it.

Let's get started with a brief introduction. Serverless is a "way," a "style," of consumption in cloud computing, but it does not mean there are no servers anymore. On the contrary, there is still a bunch of servers, virtual and physical, but they are abstracted away from the end consumer. Long story short, serverless is an abstraction from the maintenance and all the operations regarding servers.





FaaS (Functions as a Service), on the other hand, is the division, or may we say the atomic unit, of the application in terms of code that can compose a solution. Let me try to explain it simply. You, the VMware admin, are in charge of maintaining the VMware infrastructure for company X. The company decides to ride this new wave of cloud-native applications because it needs to deliver more applications and services to end customers. That implies you become mature enough to use the same vSphere substrate to support them, for example with PKS (Pivotal Container Service), which is obviously a layer of abstraction over the OSes (Docker) and over how containers are managed and orchestrated (K8s).

OK, everything fine up to here. One trait in the nature of cloud native makes FaaS make sense: the ephemerality of existence. So you, the VMware admin, will have your own GCP on premises, something similar to the VCD days but for cloud-native apps on PKS. Now, say there is an internal initiative to collect information from customers almost in real time through a satisfaction survey, because this is the core of the business. Given the ubiquity of the information, the new application that collects the survey will use events as input (all the answers from end users) and will process them to do something like generating reports or cataloging where the end user is geolocated; there will be an output. So here is the trick: with the Function as a Service model, you will not have a VM with a pod (K8s) with a container or two running all the time waiting for this input. You will have something that creates just enough back-end infrastructure to process it. For example, you need on average one minute to serve the web page with the questions and have the end customer answer them, nothing else; then the process is deleted, or, if you get big demand, it is elastic enough to scale up.
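The event-driven idea above can be sketched as a function handler. This is a minimal, hypothetical example: Dispatch's Python runtime expects a `handle(ctx, payload)` entry point, but the payload fields (`answer`, `country`) and the cataloging logic here are made up for illustration.

```python
# Hypothetical survey-collector function: the platform spins up a container
# only when an event (a survey answer) arrives, runs this handler, and tears
# the back end down again afterwards.
def handle(ctx, payload):
    answer = payload.get("answer", "")
    country = payload.get("country", "unknown")
    # "Process" the event: catalog the answer by where the user is located.
    return {"catalogued": True, "region": country, "length": len(answer)}

# One incoming event, one short-lived execution, one output.
print(handle(None, {"answer": "very satisfied", "country": "MX"}))
```

The point is not the code but the lifecycle: nothing sits idle between events, which is exactly what makes the model cheap for spiky workloads like a survey.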





So with this idea in mind, VMware puts in your hands, VMware admin, a way to support this new technology at production grade: a serverless framework called Dispatch. A note here: it is a framework, since it wraps around OpenFaaS (or Riff as well). Let's take a look at the architecture to make it easy to understand the importance of the admin role and where to lean in the presence of this new solution.








Where to start

Just follow this path of abstraction from Container as a Service, or CaaS. FaaS is an extreme case of CaaS, because there is a fine line and it is all about the application, in this case the code. First, in PaaS (Platform as a Service), the developer does not care about the programming framework; in the back end there is a platform running specific artifacts to make the magic happen. In plain words, they just upload the code somewhere and they will have something running that executes that code whenever they want. In FaaS, they just upload a piece of that code and the runtime is much shorter. Just be aware that you, as the VMware admin, will have to take into account not the time but the consumption of resources: in order for this function (which is what this piece of code is called) to run faster, it will demand more vRAM. Nothing is free.



First, since this is a framework on top of K8s, it can be placed anywhere K8s is running, and that is a big difference from other solutions. This means you can have PKS, or VKE, or GKE acting as the IaaS.

OK, here is the Dispatch architecture:






* The IAM, which allows you to connect with your identity provider, similar to how vCenter leverages AD as an identity source, in this case to consume the functions in a multitenant manner.
* An enterprise-grade API gateway (Kong): the front end for API calls, in this case functions. It is similar to load distribution, but for API calls to different applications (functions here), with the ability to authenticate and secure them.
* A control plane, just like in NSX, which basically runs the brains of all the Dispatch controls, with microservices like the Image Manager, Function Manager, Event Manager, and Identity Manager.
* The Service Catalog, the bridge between external services and functions.
* The event bus, an event/messaging queue.
* The FaaS implementation (OpenFaaS or Riff).
* The Open Service Broker API, to allow binding functions to services.
* The K8s platform used as IaaS, in this case PKS.

* An image registry for containers, in this case VMware Harbor or any Docker-compatible registry.
* An entity store (Postgres) and K8s secrets.

What is next?

The next step is to put it all into action in a step-by-step manner. For that, I decided to use PKS as the K8s platform and deploy Dispatch on top of it to show a mini how-to guide. We assume some knowledge of PKS for this, so let's get started.


Part 2 Install PKS
Part 3 Install Dispatch on top of PKS


Stay tuned.





 




Monday, August 27, 2018

Microsegmentation from real practice.

hello World,


Me again, trying to restart the things that matter the most to me, and blog posting is one of those. It has been a while since the last post, but I will try not to bluff and will go directly to the action. Many shi...things have happened and new things have filled my brain, so let's begin with a simple one: microsegmentation (using NSX-V, obviously). At this point everyone knows what this is about, what it means, and what the benefits are, but going deeper there are some unclear points, at least from the deployment or consultant perspective; I mean, the moments when you have to have all the answers in front of the final consumer of the NSX product.

Let's refer to this customer as ZZZ Bank, who has the requirement to protect the virtual environment with something that our beloved sales team fellows have printed in their minds, called microsegmentation. In essence, ZZZ Bank will be able to enforce L2-L4 firewall rules, and L7 rules with a third-party vendor (Panorama from Palo Alto in this case), at the vNIC level. Enforcement is the how, but let's check what microsegmentation is about.
First, in my conception, microsegmentation is the logic behind how this enforcement is placed: how things are going to be grouped, with what rules, and, from there, how traffic is going to be redirected to third-party services.

In practice, you deploy NSX, group the VMs into Security Groups, put security tags on them to populate the SGs, and then create FW rules for L3 communications and steering rules for advanced service analysis (packet processing). Simple, right? Well, here is what happens in this example. ZZZ Bank has a big challenge in defining the application; that means they know a sh...nothing about what the application is, how it is connected with others, its intra- and inter-communications, tiering, transactionality, and so on and so forth. The first challenge is going to be addressed using vRealize Network Insight. Presto, right? Now we are going to define, or check in some way, how the VMs communicate with each other. Not so fast: remember that ZZZ Bank doesn't know WTF is going on with these VMs. If you imagine for a moment a beautiful, shiny 3-tier application with DB, web, and app layers as part of application X, that is not the case. There is another challenge here: a single VM will be running the three tiers itself, just because, nothing else. So vRNI will help define the flows, but the permutations for the FW rules will be madness. Anyway, what I want to share is how to troubleshoot, or prove your innocence, depending on your situation. We did the integration with the third party, in this case Palo Alto; yes, it is another element in this guacamole, on top of a vSphere environment restricted by policy and human superpowers, plus the complex definition of microsegmentation for the applications. ZZZ Bank is looking for the power of inspection of East-West communication, in other words, how those applications are sending packets and what kinds of protocols and communication ports they are using.




For reference, use this link to get in touch with this fancy integration between NSX and Palo Alto, and its use cases too:


PS. Don't forget the web server for the Palo Alto OVA; I got chided by one of my fellows in the past about that point, but anyway, it was a senseless discussion.

So let's assume all of this is already in place:

  • vSphere is up and running
  • vRNI is up and running (it can be Log Insight, which I prefer for cost and simplicity, and only for mapping flows)
  • NSX-V is up and running (no controllers)
  • STs configured and assigned
  • SGs created and populated
  • The logic of microsegmentation already in place (analysis and definition of FW rules at L3)
  • Panorama is up and running
  • The VM-Series are deployed on every ESXi host that is going to use the service


Then you need to take the step of "integrating" Panorama and NSX. For that, check that the policy created from Panorama is pushed to the NSX Manager; nothing special, and it is documented.
At this point you create a simple set of rules in the Partner Security Services section inside the NSX FW. The ZZZ Bank case involves a VM called hr-web-02 and a VM called hr-db-01. Some explanation here: they are not in an overlay or any such demoniac thing (quoting the physical networking guys, and it is true). The two VMs are backed by dvPortgroups, in the same network segment, to keep the example closer to the real customer environment. Nevertheless, the results are the same if you use VXLAN Logical Switches, or if you want to reproduce it using the HOL for the Palo Alto integration, which can give you more hands-on practice so you don't take this post as absolute truth; check it here: http://bit.ly/VMware_HOLs_NSX_PA. The web VM's IP address is 192.168.130.205, and the DB VM's IP address is 192.168.130.203.







OK, so here is the first point you need to be aware of. No matter whether the VMs have NSX FW enforcement, a.k.a. FW rules at the NSX level restricting or permitting communication, in the presence of a network inspection service (redirect to PA), traffic is processed by the third-party engine, the VM-Series local Service Virtual Machine on the ESXi host where the VM "lives," only if the traffic is not dropped at the NSX level. This means, for example, that if the web VM has an NSX rule (among other rules, of course) permitting communication towards a DB VM on port 1433, and you then set a redirect rule, for instance for port 3362 of the same flow, that traffic is going to be steered and evaluated in the SVM according to the Palo Alto rules (Palo Alto "imports" the SGs as Address Groups to create the rules at the Panorama level). Otherwise, if the traffic is not allowed on port 3362 and you want to inspect it in Panorama, that traffic is not going to be steered. In summary, the Allow action in the NSX FW rule is what permits the service insertion.
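As a toy model of that first point (my own simplification, not VMware code): a flow reaches the partner SVM only when the NSX DFW allows it and a redirect rule matches it.

```python
# Toy decision model for NetX partner redirection.
def is_steered_to_svm(nsx_allows: bool, redirect_rule_matches: bool) -> bool:
    # Traffic dropped at the NSX layer never reaches the partner engine,
    # so the Allow action is what permits the service insertion.
    return nsx_allows and redirect_rule_matches

print(is_steered_to_svm(True, False))   # allowed but not redirected: stays at NSX level
print(is_steered_to_svm(True, True))    # allowed and redirected: evaluated in the SVM
print(is_steered_to_svm(False, True))   # blocked at NSX: never punted
```

Trivial as it looks, this is the table to keep in your head when a flow you expected to see in Panorama never shows up.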


Second point to consider: sometimes it is easy to follow the YouTube examples and blog posts about microsegmentation, doing everything for the three tiers with Service Composer. That is cool, but remember that in this example each VM contains all three tiers, so it is not easy to find a common denominator for all the permutations of firewall rules. That will lead you to do it manually; in other words, under this condition you can create the NSX redirect rules for the traffic by hand.




Third point to consider: there is a big undocumented difference (I did not find it anywhere) about the direction of the traffic depending on what you use to redirect it. If you are using Service Composer, the redirection takes place when the traffic arrives at the destination; if you set the redirection manually, the steering happens at the source. This may sound stupid, but it matters when you have to check whether the infamous PUNT (the label that indicates derivation of traffic to the SVM, Palo Alto in this case, at the packet level, or, by the formal definition, "Send the packet to a service VM running on the same hypervisor of the current VM") is painted in the logs on the ESXi host, in order to determine whether the steering is happening or not depending on where the involved VM is located.





Destination VM hr-db-01 ESXi esx-02a host




Source VM hr-web-02 ESXi esx-01a host



Final point to consider: where the communication is happening is easy to infer, but to show where things are really happening you need to get into the logs. Again, from the GUI everything can look OK, but sometimes we are required to provide more detail. For that, don't go crazy looking in vmkernel.log or somewhere else; just go directly and tail this log:


CLI command to print the logs


tail -f /var/log/dfwpktlogs.log


In this case, since we are using the manual approach, the redirection happens in the SVM where the destination VM is located, in this case the DB VM, and you can clearly see the magic of the redirection at the log level. You can run the command to check the log on the source and destination hosts, as shown:
logs on the source host


logs on destination
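If you want to pick the punted flows out of those logs programmatically, a small sketch like this works; note that the sample lines below are illustrative, not the verbatim dfwpktlogs field layout.

```python
import re

def redirected_rules(lines):
    """Return the rule IDs of flows that were punted to the SVM."""
    return [int(m.group(1)) for line in lines
            if (m := re.search(r"\bPUNT\s+(\d+)\b", line))]

# Illustrative entries: one flow punted by rule 652, one simply passed.
sample = [
    "INET match PUNT 652 OUT 60 TCP 192.168.130.205/49152->192.168.130.203/3362 S",
    "INET match PASS 651 OUT 60 TCP 192.168.130.205/49153->192.168.130.203/1433 S",
]
print(redirected_rules(sample))  # [652]
```

The same filter, PUNT plus the rule ID, is what you would grep for when tailing the real log on the host.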



And check as well in the NSX Manager, using the following commands to print the static rules being enforced, just to confirm that the redirect one is applied.

First, select the cluster where the VM is located


Then run the following command to check the ESXi host

Locate the VM and run this command

When you get the VM, list the interfaces where the filters, in this case firewall rules, are applied



Finally, check the rules being applied by the instantiation of the services



In this case we are exploring the DB VM. Pay attention to the tag for the rule according to the GUI above (we see vRay); second, check the ruleset number, in this case 652, which corresponds to the NetX API rule IDs.



Same case for the web VM





So for day-two operations you can save a lot of time by leveraging vRLI, since, well configured (NSX content pack), it is going to receive this kind of logs without the need to log in to each ESXi host and run any sort of CLI commands. Just be aware of the flags lingo; for that, check this link: http://bit.ly/LOG_and_sysEvents_of_NSX.


So you can conclude that the complexity depends on what the customer, in this case ZZZ Bank, wants to do from the business perspective, on how the sales team deals only at the surface with the customer's idea and the solution, and finally on how, in reality, you go deeper and have the answers to simple questions or what-if scenarios.

Hope this helps in any way to make your life less complicated.


c-ya 


+vRay



Wednesday, June 1, 2016

Ask the expert EMC Community Network : intro to NSX

Hi there,



As part of my sense of sharing everything about NSX (yes, NSX), last November I participated in the EMC Communities "Ask the Expert / Pregunte al experto" session, all about NSX. As far as I know, I don't pretend to be a rockstar; I share it just because I like it, and that is my poor justification for posting about my interaction on that forum. But wait for more this month in the same community. Here are the links below; just enjoy them, and as I said, if you need to check anything about it, feel free to ping me at the VMware Communities on VMware NSX.
In English:


In Spanish:



Stay tuned...


cya hogs











Thursday, December 31, 2015

Cross vCenter NSX : Use case customized part 1

Hi there,


In this post I will try to put down all my thoughts about a Cross-vCenter NSX use case in conjunction with SRM. It has been a while since I last blogged, maybe because I got lost in many things, including work and fighting with reality. Anyway, I will die doing NSX!!!

So let's state the business case:

This is a big client that needs to migrate all of its vSphere-based virtual infrastructure from site A, site B, and site C to site D; D has the capacity to support all the workloads. The players on the field at that time are vSphere and Site Recovery Manager. Everything was OK, and then, because the applications are hardcoded with IP addresses, the big client decided to maintain the IP addressing at any cost during this migration.


Simple, huh? No, it is not. Here are the challenges to be beaten:

The applications in every site are a mystery to deal with; even the application sponsors don't know how they are mapped to each other. I.e., application X has 300 VMs, but it is uncertain which ones they are and how critical the relationships are, so we are fucked.

Every site is a "do it yourself" vSphere deployment: not all the hardware is the same, and the guy who installed vSphere was thinking about porno when he did the deployment, so in consequence the vSphere installation is not homogeneous.

Networking is a pain in the ass, to say the least. In every single site they have routed networks, with the same segment presented in all sites. For instance, say in site A the vSphere admin is asked for a VM, or a bunch of them: to map the networks, they must sit in a specific dvPortgroup, since the required VLAN is presented at the physical segment with a specific IP subnet. And that is just the beginning: the administrator then has to ask for the network flows and, depending on the physical networks, a NAT (there are like 6K NATs, by the way), and then security for the apps and NLB.


The proposal:

So the main driver is to keep the IP addresses of the VMs, right? In a beautiful world, the perfect, elegant, and awesome solution would be Cross-vCenter NSX + SRM, but let's take a break to check why this will be the use case of years to come in DR and DA.

First, let's check what we used to have as a solution for DR from the VMware perspective. Site Recovery Manager is a tool for the orchestration of disaster recovery. This sounds fancy, but it is easier than it sounds. Here are the SRM features at a high level:

  • In vSphere environments, SRM gets to "be the man in the middle" for replication mechanisms at the disk level: whatever the replication (already certified by VMware), SRM masks the instructions to the disk arrays to break, copy, or clone and reverse the replication (please don't hate me, I know there is more detail, but I want to give an idea).
  • Array replication can be in one direction only, or both.
  • It can be used with vSphere Replication, which is network replication of the datastore files for the VMs being protected.
  • You can map vSphere objects from the protected site to the recovery site.
  • It can be used as a tool for planned migrations, maybe a hardware renewal.
  • Networks can be mapped from source to destination, but you need to wait for VMware Tools to "wake up" in order to change the IP address, or apply a script to change the VM behavior or make cosmetic changes.


So per se, SRM can help IT have a solid DR schema at low cost, since all the pieces involved are governed by this orchestrator (not vCO, don't get confused).



On the other hand, what can we do to deal with the network mappings and preserve the same IP addressing during the migration of the VMs?

Let's see what the NSX Cross-vCenter solution has to offer. At this point I guess everybody knows WTF SDN and VMware NSX are, right? Assuming that, we start to describe this use case of VMware NSX with these capabilities:


  • With Cross-vCenter NSX we are able to extend the Logical Switches (VXLAN-based logical L2 switches) across geographically separated sites; for that we need at most 150 ms RTT and a WAN link with an MTU of at least 1600.
  • We can have a natural mapping of logical wires with SRM.
  • Because an L2 logical switch is projected into two sites, you can keep the same IP addresses for the VMs hanging off it, so if you migrate from site A to site B the VM will preserve its IP address!!!
  • It is possible to have up to 8 sites in the extension.


I guess you wonder how in the world L3 is taken into account. Check this and be amazed: with VMware NSX installed in both sites (controllers in just one of them), the same concept of extension of Logical Switches applies to the Distributed Logical Router. This means we have a projection of the same DLR in both sites, being one and the same logical router. So when a packet comes from the physical world looking for a VM inside the virtual infrastructure (vSphere), the packet is routed by the Edge Services Gateway and passed to the U-DLR (U stands for Universal, so Universal DLR and Universal LS: Universal objects in Cross-vCenter NSX). This logical router knows exactly where the VM is, even if it has already been migrated to the other site, and delivers the packet to the VM!!!

I need to check how performance is hammered in this case, since there is no VPN or anything of the sort doing the extension; we are just doing IP connectivity between sites with an MTU over 1550, that's all. So, in my view, this is for WAN links with high bandwidth and low latency, since we are required to stay within 150 ms RTT, and VXLAN will have something like 1.5 Gbps of throughput.
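Those MTU numbers come straight from the VXLAN encapsulation overhead; here is a quick back-of-the-envelope check (assuming IPv4 and an untagged outer frame):

```python
# VXLAN wraps the whole original Ethernet frame in new headers.
INNER_ETHERNET = 14  # the encapsulated frame's own MAC header
VXLAN_HEADER = 8
OUTER_UDP = 8
OUTER_IPV4 = 20

overhead = INNER_ETHERNET + VXLAN_HEADER + OUTER_UDP + OUTER_IPV4
required_mtu = 1500 + overhead  # for a standard 1500-byte guest MTU
print(required_mtu)  # 1550, hence the "over 1550" floor and the 1600 recommendation
```

An 802.1Q tag on the outer frame or IPv6 transport would push the number a bit higher, which is part of why 1600 is the comfortable ask rather than the bare 1550.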






That's it for now; I need to do some work, so let me continue this horror story in the next post... and please forgive my lack of diagrams; I hope to solve that soon...


cya hogs!!