
June 20th 2017.

Months of preparation had led to this day. Trucks rolled in at 8 am and began unloading all of our gear, an entire 18-wheeler's worth. Things were looking good. By 9 am we had our racks offloaded and ready to be installed. What could go wrong, right?

Unfortunately for us, one of the many challenges of setting up a temporary network is dealing with the unpredictable nature of everything outside that scope. In this particular case, it was delays in the truss installation. Because the truss needed to go up directly above where the racks were to be installed, we could not uncrate our gear, position it, and power it on until the truss install was complete. As the morning carried on, minutes turned into hours before, at about 1 pm, we finally rolled our racks out of the crates, cabled everything up, and powered them on.

Now, four hours may not seem like much of a delay, but when you only have three days to get an entire network ready for 28,000 people, every minute counts.

Shortly after we got all the racks powered on we were then faced with the issue that – even though the truss install was complete – they still had to install a large lighting fixture onto the truss which meant that there were tons of AV workers clustered around the area where we were also trying to work.

At this point we could not wait any longer, so the team split up to check on their respective technologies, making sure all of the gear was online and functioning correctly. While the rest of the team worked diligently to verify functionality, a few of my teammates and I started patching in the distributions.

Getting the distributions online sounds easier than it was. We had three distributions, one each at the Mandalay Bay, MGM, and Bellagio. Mandalay Bay had four links, while MGM and Bellagio had two links each. The Mandalay Bay links were easy since everything was local, but we had real trouble with some of the others. Tracking down where the issue lay was not easy given the number of patches and cassette/MPO connections between the switches. We used a 5 mW 650 nm red-laser visual fault locator to verify fiber continuity. Even after testing each connection, a couple of links still wouldn't come up. We ended up cleaning as many fiber ends as we could to finally get them working. For one link, we honestly resorted to the super-scientific method of blowing on the connector and plugging it back in!

After seeing how much trouble we went through, we had our NOC automation expert, Jason Davis, create a custom dashboard to monitor optical light levels.
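A dashboard like that can start from something very simple. As a rough sketch (the threshold and CLI column layout below are illustrative, not what the actual dashboard used), light levels reported by a `show interfaces transceiver`-style command can be parsed and flagged:

```python
import re

# Illustrative threshold: flag receive power below -14 dBm
RX_LOW_DBM = -14.0

def low_light_ports(cli_output, threshold=RX_LOW_DBM):
    """Parse 'show interfaces transceiver'-style output and return
    (port, rx_dbm) tuples whose receive power is below the threshold."""
    flagged = []
    for line in cli_output.splitlines():
        # Assumed columns: Port  Temp  Voltage  Current  Tx(dBm)  Rx(dBm)
        m = re.match(r"^(\S+)\s+[\d.-]+\s+[\d.-]+\s+[\d.-]+\s+([\d.-]+)\s+([\d.-]+)", line)
        if m:
            port, rx = m.group(1), float(m.group(3))
            if rx < threshold:
                flagged.append((port, rx))
    return flagged

sample = """\
Port       Temp  Voltage  Current  Tx    Rx
Te1/1/1    31.9  3.28     6.0      -2.1  -3.4
Te1/1/2    32.4  3.29     5.8      -2.0  -21.7
"""
print(low_light_ports(sample))  # Te1/1/2 is well below the threshold
```

Polling this on a schedule and graphing the results is essentially what a light-level dashboard does.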

I was feeling better now: distributions were online, the internet was up (except when the firewall guy took it down for 30 minutes!), and we were in a good place. That is, until we came in the next day to find that the crew laying the carpet had unplugged the power to an entire DC rack! That's right, they took down the DC distribution by unplugging not one but two PDUs. The good news is that we no longer needed to test failover of the DC; someone did that for us, and it was a success.

The remaining days were spent testing and fine-tuning configuration while getting the access layer online. As you would expect when deploying 500 switches, there were a lot of VLAN changes and a lot of port reconfigurations. This year we had a member of the team watching logs almost full time. Aside from a whole lot of BPDU Guard err-disables, we caught CPU spikes, power supply failures, and some interface flaps. We automated part of this process by having a message sent to a Spark room each time we saw an errdisable message or a port change was made. This really helped speed up recovery.
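The alerting piece can be as small as a syslog filter feeding the Spark messages API. Here is a minimal sketch of the idea; the message format is hypothetical and the token and room ID would come from your own Spark bot setup:

```python
import json
import re
import urllib.request

SPARK_URL = "https://api.ciscospark.com/v1/messages"
ERRDISABLE_RE = re.compile(r"%PM-4-ERR_DISABLE: (\S+) error detected on (\S+)")

def errdisable_alert(syslog_line):
    """Return a short alert string for an errdisable syslog message, else None."""
    m = ERRDISABLE_RE.search(syslog_line)
    if m:
        cause, port = m.groups()
        return f"errdisable ({cause}) on {port}"
    return None

def post_to_spark(token, room_id, text):
    """POST a message to a Spark room (requires a valid bot token)."""
    body = json.dumps({"roomId": room_id, "text": text}).encode()
    req = urllib.request.Request(
        SPARK_URL, data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})
    return urllib.request.urlopen(req)

line = "Jun 21 10:02:11 sw-access-17: %PM-4-ERR_DISABLE: bpduguard error detected on Gi1/0/24"
print(errdisable_alert(line))
```

Wiring `errdisable_alert` into whatever is already tailing your syslog feed, and calling `post_to_spark` on a hit, gets you the same kind of notification loop.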

Considering all the testing and troubleshooting we had to do, you would be right in guessing we spent a lot of time sitting at our desks staring at our computers. Normally this wouldn't be an issue, except that it was freezing cold. You heard me right: cold, in Las Vegas. A whole lot of us ended up buying sweatshirts at the Cisco store. That's definitely a lesson learned for next time: we need space heaters!

Lastly but most importantly, the most essential item for surviving the NOC at CLUS is…

Authors

Ryan D'Souza

Technical Marketing Engineer

Technology Experiences


It’s time again for our Midyear Cybersecurity Report (MCR), which offers updates on the security research and insights revealed in the recent Annual Cybersecurity Report. The unsettling news at this halfway point in the year is that the bad actors are adding new and sophisticated spins to their exploits. Their aim is not just to attack, but to destroy in a way that prevents defenders from restoring systems and data. We’ve coined a name for adversaries’ new goal: destruction of service (DeOS).

Many of the security trends we explore in the MCR tie to the future emergence of DeOS. For example, attackers are innovating ransomware and DDoS campaigns so that they can seriously disrupt an organization’s networks. By doing so, bad actors also damage the organization’s ability to recover from an attack. In their battle to gain time and space to operate, adversaries remain on the hunt for ways to evade detection, usually by rapidly changing approach when some tactics fail to work. As we explain in the MCR, attackers shift gears by dropping newer tools and going back to old ones – like moving away from exploit kits while shifting to business email compromise (BEC) and social engineering to pull in revenue.

The IoT-DDoS Connection

IoT devices and systems were never designed to protect themselves against cyberattacks, so adversaries are exploiting their myriad security weaknesses. Naturally, bad actors have figured out that IoT devices, by virtue of their prevalence and ease of exploitation, present opportunities to build botnets that can launch DDoS attacks more powerful than any we've seen before. We've entered what we're now calling the "1-Tbps DDoS era," in which IoT-driven DDoS attacks have the potential to disrupt the Internet itself.

Malware Evolves

At the same time that they seek out new avenues for launching their campaigns, adversaries are fine-tuning malware, one of the workhorses in their attack toolboxes. Malware is evolving in ways that can help attackers with delivery, obfuscation, and evasion. Some adversaries use malware distribution systems that require users to take positive action to activate the threat. In doing so, they avoid detection, because the malware can’t be identified in a sandbox environment. In addition, some malware authors are developing ransomware-as-a-service (RaaS) platforms that allow adversaries to quickly enter the lucrative ransomware market.

Opportunities for Defenders

Given adversaries’ skill at outwitting defenses, is there good news here? Yes: Cisco’s median time to detection (TTD) has been trending downward, from a little more than 39 hours in 2015 to about 3.5 hours for the period from November 2016 to May 2017. This positive trend shows us that defenders are identifying known threats quickly and that attackers are under more pressure than ever to find new tactics to avoid detection. We’re also focusing on ways defenders can address the unique security challenges facing their industries. In a special section of the MCR, we offer more findings from Cisco’s Security Capabilities Benchmark Study, pinpointing how key verticals can reduce complexity in their IT environments and embrace automation.

The key to matching wits with adversaries means understanding every risk in our environment, devoting resources to swiftly responding to threats, and sharing research and ideas across the industry so we’re not in the dark about successful security approaches. Toward that end, we’re grateful that in the MCR, you can read contributions from Cisco partners who’ve generously shared their insights: Anomali, Flashpoint, Lumeta, Qualys, Radware, Rapid7, RSA, SAINT Corporation, ThreatConnect, and TrapX Security.

Learn more by downloading and reading the Cisco 2017 Midyear Cybersecurity Report.

Authors

David Ulevitch

No Longer with Cisco


Seven years ago, the team behind Unleashing IT started with a question: “What do IT and LoB professionals really need to know to help their organizations succeed?”

The answer today is the same as it was then: Less hype. More insight.

That’s why we’re committed to bringing you the latest perspectives from those on the frontlines of innovation and from analysts with their fingers on the pulse of business and IT.

Today, I’m excited to announce an all-new redesign for Unleashing IT—one that will help us better support what’s next and give all our current and upcoming content a great home.

It’s a fresh look and feel for the site. Which is fitting, because, at the same time, we’re entering a whole new era of smarter, more intelligent, and more intuitive IT—from the network to the data center to the applications and end-user devices your organization uses every day.

As you seek to put automation, intelligence, data analytics, and other key technologies to use for your business or organization, we’ll be there every step of the way. With new perspectives, real-world case studies, and the facts you need to make smarter strategic choices and investments.

Consider us your guide to a more efficient, agile, and productive organization: from the IT department to your line-of-business end users.

Check out and subscribe (it’s free!) to the new Unleashing IT today.

For more background on our latest article, ‘Big Data analytics is driving healthcare transformation – With an integrated, automated, massively scalable Vscale Architecture, Inovalon is helping the healthcare industry transition from volume to value’, read Harry Petty’s blog.

Join the conversation on Twitter using #unleashingIT.

Authors

Klaus Schwegler

No Longer with Cisco


Enhancing customer experience to unlock the Business Value from Wireless Infrastructure

Wi-Fi is fast becoming an essential commodity, on par with air, food, and water. In response, almost every modern business around the world is attempting to offer free Wi-Fi to visitors. The next big question is: “Can we offer a best-of-breed Wi-Fi infrastructure and help customers extract business value from it?”

Ubiquitous digital communication has created a new breed of customers – customers who expect to be connected, empowered, and always on. But what’s particularly interesting is that their desire for digital experiences is changing their expectations of physical spaces.

They expect free, fast and legally compliant Wi-Fi access wherever they go, so that they can stay plugged into their social lives, share their experiences, compare prices and find more information. 64% of consumers even expect brands to respond and interact with them in real-time.

This Wi-Fi conversation—including infrastructure and business impact—is shifting toward the consideration of intent and context. A more effective experience should be driven by intent and informed by context. To this point Cisco just introduced a more effective way to network—The Network. Intuitive. And, for years, Connected Mobile Experiences (CMX) has enabled customers to more effectively engage customers by leveraging intent and context.

By investing in the CMX location insights and engagement platform, physical businesses can build connections with customers in ways previously unimaginable: analyzing visitors’ at-location behavior, inferring their intent, creating brand experiences, and engaging visitors with relevant context within a physical space.

And the more personal that engagement, the more effective.

Introducing Cisco CMX Engage: Next-Gen Wi-Fi Analytics and Engagement Platform
Cisco CMX Engage is a location insights and engagement cloud software platform that integrates with Cisco Enterprise Wireless, Cisco Meraki, and Cisco Connected Mobile Experiences to allow organizations to acquire customers, build location based customer personas, and engage directly with customers in real-time.

It delivers value to enterprises across industries, including shopping centers, retail, stadiums, airports, and hospitality. Live deployments include InterContinental Hotels Group (the largest location-based cloud deployment in the world), SAP Center, the New York Yankees, the Norwegian Football League, Majid Al Futtaim, and Westfield Century City, among others.

CMX Engage is the first location-based cloud platform that is truly enterprise ready, with global deployments handling peak traffic of over 10 million transactions per hour, deployed across 3,000+ enterprise locations, adding about 30 new locations every day, with 24/7 monitoring and end-to-end SLAs.

The experience provided through CMX Engage is personalized and sensitive to what each visitor might want, because it leverages the context of what visitors are doing at that particular moment. To match the appropriate offerings to the right customer and their location persona, CMX Engage allows brands to obtain a full 360-degree view of the customer, blurring the lines between the physical and digital worlds.

By using CMX Engage, you don’t simply collect data on your customers; you leverage an opportunity to create and sustain a meaningful relationship with your customer. Because the closer you put people to the experiences they care about, the more they engage with your brand and positively impact your customer lifetime value.

Product Features – What Sets CMX Engage Apart 

  • Presence analytics – Across global locations, measure key visitor activities that matter to your business including the number of visitors, dwell time, new vs. repeat visitors, peak traffic hours and more.
  • Multi-channel acquisition – Using Wi-Fi, connect with and acquire visitors and analyze their at-location behavior to within a few meters. Businesses can then send hyper-local marketing messages and notifications in real time as people move around the venue.
  • Dashboard – The centralized, online dashboard provides a real time view of visitor activity, location personas, engagement summary, marketing campaigns and other performance metrics.
  • Proximity rules – Powerful, configurable rules engine defines actionable location based insights and engagements.
  • Engagement platform – CMX Engage allows you to setup automated campaigns in minutes based on behavior and location. You can engage with customers via sms (text) messages, emails, app notifications, and smart captive portals.
  • Data ownership and API integration – Data is owned by the enterprise and compliant with privacy and security standards worldwide. The APIs made available can integrate with all types of enterprise systems, including marketing clouds, CRM, ERP, loyalty, and PoS systems.
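To make the last point concrete, here is a hypothetical sketch of what consuming such an API could look like on the enterprise side. The event field names and the mapping below are invented for illustration; consult the product documentation for the real payloads.

```python
# Hypothetical visitor event -> CRM record mapping (field names invented).
def visitor_to_crm(event):
    """Flatten a location 'visit' event into a record an enterprise
    system (CRM, loyalty, etc.) could ingest."""
    return {
        "customer_id": event["deviceId"],
        "site": event["location"]["name"],
        "first_seen": event["firstSeen"],
        "dwell_minutes": round(event["dwellSeconds"] / 60, 1),
        "repeat_visitor": event["visitCount"] > 1,
    }

event = {
    "deviceId": "ab:cd:ef:01:23:45",
    "location": {"name": "Westfield Century City"},
    "firstSeen": "2017-07-11T10:15:00Z",
    "dwellSeconds": 1530,
    "visitCount": 3,
}
print(visitor_to_crm(event))
```

The point is the ownership model: because the enterprise owns the data, this kind of transformation can run in its own systems rather than inside the vendor's cloud.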

Two Different Product Packages For Two Different Business Needs
CMX Engage is available in two editions: 1) CMX Engage and 2) CMX Engage Advanced.

In the Engage edition, businesses can connect and acquire customers and also deliver targeted, interactive experiences through smart captive portals. Using profile rules that define in-location personas, businesses can gain deep customer insights.

The Advanced edition includes the same features as the Engage edition, with the additional capability for businesses to turn insights into action by delivering personalized engagements to visitors through SMS, email, and app notifications, and by integrating with their enterprise systems.

To learn more, check us out at www.cisco.com/go/cmxengage. Contact us for a free demo and limited period free trial offer.

Authors

Greg Dorai

Senior Vice President & General Manager

Cisco Switching


Unless you live off the grid, you can’t help but notice that virtual sales is playing a larger role in every facet of our lives. In addition to driving the most obvious purchases (e.g., my 10-plus Amazon orders in the last 30 days), it’s also powering buying and selling motions across bigger-ticket, higher-consideration items.

Take the car-buying process as an example. Late last year a good friend was in the market for a new car. He selected the make, model, and must-have accessory packages of his new vehicle before ever setting foot in a dealership. Instead, he relied heavily on digital content (safety reports, consumer reviews, and feature lists and options) throughout his month-long evaluation process.

In all of this time, he never established contact with a dealer until he was ready to personally negotiate and make the purchase. (Disclaimer: he admits to taking two somewhat incognito test drives at dealerships, but otherwise he didn’t engage with the sales staff.)

Ultimately, working with a leading warehouse club, he was matched online with a local dealer authorized to sell his precise (and in stock) vehicle at the club’s specified pricing. It was the easiest and most satisfying car buying experience he said he’s ever had, and it exemplifies virtual sales at its finest.

The above scenario demonstrates a true collaboration across manufacturer, distributor and partner using digital and human-based interactions to deliver a seamless and exceptional buying experience. This same B2C virtual sales approach is increasingly being used in B2B engagement, so much so that in their report, Evolution of the Virtual Sales Rep: Taking the Buyer Experience to the Next Level, IDC predicts that by 2018, 20 percent of B2B sales teams will have gone “virtual.” And that leads me to my next point, which is that virtual sales is a B2B movement that’s here to stay. Here are three reasons why:

  • Digitization has bred savvy, self-empowered buyers who increasingly turn first (if not exclusively) to digital content and channels when evaluating products and services. In response, partners must embrace a content marketing strategy with the sole purpose of supporting the buyer through this process. Consider the value of digital assets that provide an independent perspective (analyst insights, comparison reviews, customer use cases and support forum access), as this type of content can help buyers better navigate their decision-making process. In addition, it’s important to digitize pricing and comparative information that was historically held close to the vest, as buyers now expect transparency and will appreciate the convenience of easy access to the information they seek. Remember, the primary goal of your content marketing should not be to sell, but rather to educate, support and provide exceptional service to the buyer.
  • Some high-consideration purchases will never convert digitally, which is why your organization should not rely strictly on digital sales and marketing methods. As illustrated by my friend’s car buying experience, some purchases will always require a level of human intervention, at either the evaluation or transactional phase. Virtual sales merges the necessary digital content and online engagement tactics that self-sufficient buyers prefer, with the human expertise and value-oriented support at the most appropriate time in the buyer’s journey.
  • Five-star service is a must and, in the age of digitization, it may be the sole distinguishing factor between you and the competition. When done right, virtual sales leverages digital assets (third-party research, social networks, buyer profile data and more), along with predictive/prescriptive analytics and automation, to enable you to demonstrate a deep understanding of your customer’s business challenges. It truly is a hybrid approach that relies as much on the personal execution of a skilled sales expert as it does the underlying systems and tools that personalize the virtual buying process. The aim is to move away from mass marketing/selling, or even segmented marketing/selling, to deliver tailored experiences that are created for a market of “one.” Using the knowledge gained from the digital footprints that make up the customer’s post-sale journey, we can more accurately address the precise needs and interests of the individual (and their company) at a specific point in time. When that occurs, amazing things happen.

Make no mistake, virtual sales is reshaping our industry. All across Cisco, we’re working hard to create the digital platforms and digital assets that enable partners to increase profits, expedite sales, and improve customer retention and success. From Lifecycle Advantage to Cisco Impact, SuccessHub and more, our priority is to empower your virtual sales teams, both today and in the future.

Authors

Scott Brown

Senior Vice President

Global Virtual Sales & Customer Success


Today, Talos is disclosing several vulnerabilities that were identified by Portcullis in various software products. All four vulnerabilities have been responsibly disclosed to each respective developer in order to ensure they are addressed. To better protect our customers, Talos has also developed Snort rules that detect attempts to exploit these vulnerabilities.

Vulnerability Details

TALOS-2017-0313 (CVE-2016-9048) ProcessMaker Enterprise Core Multiple SQL Injection Vulnerabilities

TALOS-2017-0313 was identified by Jerzy Kramarz of Portcullis.

TALOS-2017-0313 encompasses multiple SQL injection vulnerabilities in ProcessMaker Enterprise Core 3.0.1.7-community. These vulnerabilities manifest as a result of improperly sanitized input received in web requests. An attacker who transmits a specially crafted web request with parameters containing SQL injection payloads to an affected server could trigger these vulnerabilities. This could allow exfiltration of database information and user credentials, and in certain configurations, access to the underlying operating system.
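The class of bug is worth illustrating. A query built by string concatenation lets attacker-supplied parameters rewrite the SQL, while a parameterized query binds them as pure data. This is a generic sketch of the vulnerability class, not ProcessMaker's actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 1), ('bob', 0)")

user_input = "nobody' OR '1'='1"  # classic injection payload

# Vulnerable: the payload becomes part of the SQL and matches every row.
vulnerable = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'").fetchall()

# Safe: the payload is bound as a literal value and matches nothing.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)).fetchall()

print(vulnerable)  # both rows leak
print(safe)        # []
```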

Read more »

Authors

Talos Group

Talos Security Intelligence & Research Group


In the previous posts (1 and 2) we talked about how technical innovation (in networking and containers) can address an operational challenge involving different stakeholders within an organisation.

In the next two posts, starting today, we zoom out to see another pattern of dynamics between business and IT: the pressure for faster provisioning of resources for a new project, and some solutions around it.

 

Automation, a first step towards Private Cloud

Everyone is aware of the value of automation.

Many companies and individual engineers have always implemented various ways to save time, from shell scripts to complex programs and to fully automated IaaS solutions.

It helps reduce the so-called “Shadow IT”, a phenomenon that occurs when developers or Line of Business users can’t get a fast enough response from IT and rush to a Public Cloud to get what they need. In most cases they will complete and release their project sooner, but sometimes trouble starts in the production phase (unexpected additional budget for IT, new technologies that IT is not ready to manage, etc.).

Shadow IT happens when corporate IT is not fast enough

Do you blame them? Developers do their best to provide value to their stakeholders. The complexity of traditional IT organization is the real cause of the problem, and private cloud is a solution to it.

For sure, we sometimes see silos (a team responsible for servers, one for storage, one for networking, one for virtual machines, and of course one for security…), and the provisioning of even a simple request can take too long.

Process inefficiency due to silos and wait time

Pressure on the Infrastructure Managers and Admins

It is safe to say that inefficiency in the company affects the business outcome of every project: longer time to market for strategic initiatives, and higher costs for infrastructure and people.
Finger-pointing starts, in order to identify who is responsible for the bottleneck.

The efficiency of teams and individuals is questioned, and responsibility is cascaded through the organization from project managers to developers, to the server team, to the storage team, and finally… to the network team at the end of the chain, once they run out of people to blame 😉

Those at the top (they consider themselves at the top of the value chain) believe – or try to demonstrate, and I’ve done it myself in the past – that their work is slowed down by the inefficiency of the teams they depend on. They suggest solutions like: “You said your infrastructure is programmable; now give me your API and I will create everything I need on demand.”

Of course, this approach could bring some value, but it reduces the relevance of the specialists’ teams that are supposed to manage the infrastructure according to best practices, apply architectural blueprints optimized for the company’s specific business, and know the technology in much deeper detail.

They can’t accept being bypassed by a bunch of developers who want to corrupt the system, playing with precious assets with their dirty hands… 🙂

 

The definitive question is: who owns the automation?

Should it be left to people that know what they need?

Should it be owned by people that know how technology works and at the end of the day are responsible for the SLA including performance, security and reliability that could be affected by a configuration made by others?

By definition the developer is not an expert on security; being able to easily program a switch via its REST API to get a network segment is not the same as making sure that the traffic is secured and inspected.

In my opinion, and based on experience shared with many customers, the second answer is the correct one: IT administrators and infrastructure managers should have the responsibility. However, there are ways to modernize their role and value.

 

Offering a self-service catalog

A first, immediate solution could be the introduction of an easy automation tool such as Cisco UCS Director, which manages almost every element in a multi-vendor data center infrastructure – from servers to networks to storage to virtualization – all from a single dashboard.

But what is more interesting is that every atomic action you can perform in the web GUI is also reflected as a task in the automation library, allowing you to create custom workflows linking all the tasks of a process you want to automate.

Once the automation workflow has been built and validated, it can be used by the IT admin or the Operations team to save time and ensure a consistent outcome (no manual errors). But it can also be offered as a service to all the departments that depend on IT for their projects.
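Conceptually, such a workflow is just an ordered chain of atomic tasks sharing context. Here is a toy sketch of the pattern (not UCS Director's actual task API; the task names and values are invented):

```python
# Toy workflow engine: each task is a function taking and returning a
# shared context dict; the workflow runs tasks in order.
def run_workflow(tasks, context=None):
    context = dict(context or {})
    for task in tasks:
        context = task(context)
    return context

def provision_vlan(ctx):
    ctx["vlan"] = 142  # in a real system, allocated from IPAM/network policy
    return ctx

def create_vm(ctx):
    ctx["vm"] = f"app-vm-on-vlan-{ctx['vlan']}"  # consumes the earlier task's output
    return ctx

result = run_workflow([provision_vlan, create_vm])
print(result)
```

Because every step is an atomic, reusable task, the same chain runs identically whether an admin triggers it or a self-service portal does.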

And they will certainly appreciate the efficiency improvement. :)

This important step will bring you towards the implementation of a Private Cloud. As we all know, the mature adoption of a cloud model can be seen as a journey that implies careful change management (technology, but mainly processes and people are affected). Some companies are afraid of the risk associated with change and don’t start that journey.

But once the automation is in place, you can easily define what services you want to offer and the governance model that best fits your organization. This can eventually be based on a product like Cisco Prime Service Catalog, or another framework providing a portal for IT Service Management (ITSM).

 

Cloud is about delegation (of task responsibility or ownership of resources): offering these in a self-service catalog does not necessarily imply automation (there could be humans behind the portal).

On the other hand, automation can both offer value to the IT admin (even in the absence of cloud) and be the enabler for the cloud in the mid term (the cloud portal would delegate provisioning to the automation engine).

You will find more detail in the second part of this post.

 

Resources

Cisco Prime Service Catalog

http://lucarelandini.blogspot.it/2016/02/governance-in-hybrid-cloud.html

http://lucarelandini.blogspot.it/2016/10/just-1-step-to-deploy-your-applications.html

http://lucarelandini.blogspot.it/2017/01/hybrid-cloud-lessons-learned.html

 

Authors

Luca Relandini

Principal Architect

Data Center and Cloud - EMEAR


Segment Routing has crossed the chasm – “Early Adopters” have already embraced it, and now the “Early Majority” is adopting it. Interestingly, at the very same time the industry is surfing the Segment Routing MPLS wave, the next wave is already coming in – and it will be even more disruptive to the way you design and engineer your network infrastructure. This is the Segment Routing IPv6 (SRv6) wave.

Segment Routing has been designed from the ground up to work in a native IPv6 environment. The core capabilities available with SR MPLS – such as 50 ms protection (TI-LFA) and Traffic Engineering with SLAs (e.g., low latency, disjointness) – are de facto available with SRv6.

But it does not stop here – SRv6 opens the door to an entirely new paradigm: “Network Programming.”

The purpose of this blog is not to dive into the nitty gritty details of how that works, but rather to give you a sense of what’s possible when you construct a network infrastructure that has embedded programming capabilities.

IPv6 packet headers were designed with Extension Headers. That did not seem like a big deal initially, but one quickly realizes these extensions can make a huge difference.

SRv6 takes advantage of these Extension Headers by inserting a Segment Routing Header into IPv6 packets. So, every IPv6 packet can now be augmented with a list of Segment Identifiers (Segment IDs), which are nothing other than 128-bit IPv6 addresses.

Well, so far it may look pretty similar to SR MPLS, as in both cases you have a list of Segment IDs used to direct traffic over a specific path through the network. True, with the exception of the Segment ID size – 128 bits for SRv6 compared to 32 bits for SR MPLS.

That increase in size actually changes the whole paradigm! With 128 bits – four times 32 bits – you can pack more than mere IP addresses into a Segment ID and hence go beyond routing purposes.

These 128-bit Segment IDs can be used and allocated for different purposes. Let me give you an example:

  • The first 64 bits can be used to direct traffic to a specific node in the network – the “main body” of the program
  • The next 32 bits can be used to enforce some actions on the traffic – the “function” part
  • The remaining 32 bits can be used to pass some additional information – the “argument” part
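To make the split above concrete, here is a small sketch that packs a locator, function, and argument into one 128-bit IPv6 address. The bit widths follow the example; actual SRv6 deployments choose their own allocation:

```python
import ipaddress

def make_sid(locator, function, argument):
    """Pack a 64-bit locator, a 32-bit function, and a 32-bit argument
    into one 128-bit SRv6 Segment ID (an IPv6 address)."""
    assert locator < 2**64 and function < 2**32 and argument < 2**32
    value = (locator << 64) | (function << 32) | argument
    return ipaddress.IPv6Address(value)

# Locator 2001:db8:1:1::/64, function 0x64, no argument
sid = make_sid(0x2001_0DB8_0001_0001, 0x64, 0)
print(sid)  # 2001:db8:1:1:0:64::
```

Because the result is an ordinary IPv6 address, it routes through the network like any other destination while carrying the function and argument along with it.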

For those with any kind of past programming experience, it’s easy to see where we are heading.

 

SRv6 necessitates a mindset shift, as it makes your applications and your network interact in a completely new way. It’s no longer about merely routing traffic from point A to point B. SR MPLS had already operated a shift there, bringing the capability to route traffic from point A to point B according to specific constraints expressed by applications (e.g., SR Traffic Engineering). SRv6 goes one step further by enabling the infrastructure to perform actions for the applications.

With SRv6, we provide you with a toolkit to program your network infrastructure in a way that was not possible before. Use cases, enabled by this network programming paradigm, are numerous and are still to be fleshed out.

“With the concept of network programming and SRv6, we are now able to take our network to the next level of simplification while adding richer embedded capabilities. Through its extensibility and host stack integration, it provides service and content providers the ability to create true application oriented network behaviors within and across network domain boundaries.” said Daniel Bernier, Senior Technical Architect, Bell Canada.

In March 2017, an “SRv6 Network Programming” draft was published at the IETF, rallying support from some major service providers. At the same time, early hardware implementations can already be demonstrated. To see more, watch this demo on SRv6 VPN and Traffic Engineering (TE).

We are leading the pack by working closely with some lead operators and by focusing our efforts on use cases that bring significant and timely benefits to our customers.

Authors

Brendan Gibbs

VP Product Marketing


This blog was authored by Paul Rascagneres and Warren Mercer.

Introduction

.NET is an increasingly important component of the Microsoft ecosystem, providing a shared framework for interoperability between different languages and hardware platforms. Many Microsoft tools, such as PowerShell, and other administrative functions rely on the .NET platform for their functionality. Obviously, this makes .NET an enticing platform for malware developers too. Hence, malware researchers must also be familiar with the platform and have the necessary skills to analyse malicious software that runs on it.

Analysis tools such as ILSpy help researchers decompile code from applications, but they cannot be used to automate the analysis of many samples. In this article we will examine how to use WinDBG to analyse .NET applications using the SOS extension provided by Microsoft.
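For readers who haven't used SOS before, a typical interactive session looks roughly like this (commands listed from memory; the exact SOS module name varies with the .NET runtime version loaded in the target):

```
.loadby sos clr          $$ load the SOS extension matching the loaded CLR
!threads                 $$ list managed threads
!clrstack                $$ managed call stack for the current thread
!dumpheap -stat          $$ histogram of managed heap objects by type
!dumpobj <address>       $$ inspect a specific managed object
```

Scripting sequences like this is what makes WinDBG suitable for automating analysis across many samples, where a decompiler-centric workflow would not scale.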

Read More

Authors

Talos Group

Talos Security Intelligence & Research Group