Software testing. For a long time, software testing was one of those dark alleys of the software development process: often ignored, treated as an afterthought, and staffed by “someone else” who did important work but sat outside the core development process.
Well, that has all changed.
In the SaaS world – especially one governed by continuous delivery – testing is not just an afterthought. It’s a core part of the development process. And like many other engineering processes, there are differing levels of maturity that SaaS development shops can evolve through. In a lot of ways, these stages of maturity resemble Maslow’s hierarchy of human needs: you really, really have to execute on the stuff at the bottom, and as you succeed there you move up to higher levels and achieve greater happiness – in this case through higher-quality software. In testing, each layer acts as a filter, and each filter catches bugs. The layers at the bottom catch the most basic, easy-to-find bugs. As you go up the stack, the technology helps you catch problems that are rarer and harder to identify, reproduce, and fix.
Here is my view on the layers of the hierarchy of SaaS testing needs:
Tags: analytics, Cisco, collaboration, SaaS, software, Testing
The video4linux subsystem of the kernel, which deals with video capture, video output and hardware video codecs, has a very large API with many ioctls, settings, options and capabilities. And most hardware will only use a fraction of that. This makes it hard to test whether your driver implements everything it should, and hard to test whether your application supports all hardware variants.
Providing tools that allow you to gain confidence in the quality of the code you are writing, whether it is a driver or an application, would be very helpful indeed. As co-maintainer of the subsystem, and as part of my job trying to convince the industry to switch to the V4L2 API instead of (Oh no! Not again!) rolling their own, I thought this was a worthy cause to spend time on.
I started writing a utility called v4l2-compliance to test drivers over 6 years ago, but for a long time it only tested a fraction of the V4L2 API. The test coverage slowly increased over the years, but it wasn’t until February this year, when support for testing video streaming was added, that it became a really powerful tool. Today it has test coverage of around 90% of the API, and new V4L2 drivers must pass the v4l2-compliance tests before they are allowed in the kernel.
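To give a flavor of what such a test looks like, here is a minimal sketch of a compliance-style check (illustrative only, not actual v4l2-compliance code, and /dev/video0 is just an assumed device node). Every V4L2 device node must implement VIDIOC_QUERYCAP, so even the most basic test opens the node and sanity-checks the reported capabilities:

/* Sketch of a compliance-style check: query the driver's capabilities
 * and verify the basics. Illustrative, not v4l2-compliance code. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/videodev2.h>

int main(void)
{
    struct v4l2_capability cap;
    int fd = open("/dev/video0", O_RDWR);

    if (fd < 0) {
        perror("open");
        return 1;
    }
    memset(&cap, 0, sizeof(cap));
    /* Every V4L2 device node must implement VIDIOC_QUERYCAP. */
    if (ioctl(fd, VIDIOC_QUERYCAP, &cap) < 0) {
        perror("VIDIOC_QUERYCAP");
        close(fd);
        return 1;
    }
    printf("driver: %s, card: %s, bus: %s\n",
           (char *)cap.driver, (char *)cap.card, (char *)cap.bus_info);
    /* A real compliance test checks much more: non-empty strings,
     * consistent capability flags, and so on. */
    if (!cap.capabilities)
        fprintf(stderr, "FAIL: no capabilities reported\n");
    close(fd);
    return 0;
}

The real utility runs a long series of checks like this, one for every ioctl the driver claims to support.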
One important missing piece in the compliance utility is testing the various cropping, composing and scaling combinations. The main reason is that it wasn’t always clear in the API what the interaction between the various actions should be. For example, changing a crop rectangle might require a change to the compose rectangle as well. Should that be allowed, or should an error be returned instead? (Answer: it’s allowed.) I hope to add support for testing this some time this year.
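To make that interaction concrete, here is a rough sketch of the selection API involved (assuming fd is an open video capture node that supports cropping and composing; the rectangle values are arbitrary). The point is that after setting a crop rectangle you read the compose rectangle back rather than assuming it is unchanged:

/* Set a new crop rectangle, then read back the compose rectangle,
 * which the driver is allowed to adjust as a side effect. */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static int set_crop_check_compose(int fd)
{
    struct v4l2_selection sel;

    memset(&sel, 0, sizeof(sel));
    sel.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    sel.target = V4L2_SEL_TGT_CROP;
    sel.r.left = 0;
    sel.r.top = 0;
    sel.r.width = 320;    /* arbitrary example rectangle */
    sel.r.height = 240;
    if (ioctl(fd, VIDIOC_S_SELECTION, &sel) < 0)
        return -1;

    /* The driver may have modified the compose rectangle to keep
     * the two consistent, so query it instead of assuming. */
    memset(&sel, 0, sizeof(sel));
    sel.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    sel.target = V4L2_SEL_TGT_COMPOSE;
    return ioctl(fd, VIDIOC_G_SELECTION, &sel);
}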
It would be nice if this could be easily tested with an application and a driver that support all the various combinations. But no such driver exists, and that brings me to the second part of this post: how do you test an application against the bewildering array of hardware out there? All too often application developers test only against the hardware they own, so the application is likely to fail miserably on hardware that implements a different subset of the V4L2 API.
The answer to this question is that a virtual V4L2 driver is needed: one that implements as much of the V4L2 API as possible and that can be configured in various ways to accurately model real hardware. Today there is a virtual video driver in the kernel called vivi, but unfortunately that driver doesn’t act at all as real hardware does, and it only supports simple video capture, which is just a small subset of the whole API.
In order to resolve this situation I wrote a new driver called vivid, the Virtual Video Test Driver. This driver covers most of the V4L2 API and is ideal for testing your application. Writing it was very useful since it forced me to think about some of the dark and dusty corners of the V4L2 API, and some of those corners needed a big broom to clean up. I found a variety of bugs in the V4L2 core and in the API documentation, simply because this driver exercised parts of the API that are rarely if ever used.
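As a concrete example of the application-side testing vivid enables (a minimal sketch, assuming fd is an open capture node): instead of hardcoding a pixel format, an application can enumerate what the driver actually offers with VIDIOC_ENUM_FMT. Running the same code against vivid and against your real hardware quickly shows how much the supported subsets differ:

/* Enumerate the pixel formats a driver actually supports instead
 * of assuming one. */
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/videodev2.h>

static void list_formats(int fd)
{
    struct v4l2_fmtdesc fmt;

    memset(&fmt, 0, sizeof(fmt));
    fmt.type = V4L2_BUF_TYPE_VIDEO_CAPTURE;
    while (ioctl(fd, VIDIOC_ENUM_FMT, &fmt) == 0) {
        printf("format %u: %s\n", fmt.index, (char *)fmt.description);
        fmt.index++;
    }
}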
I also realized that a driver like this is ideal for emulating hardware that is not yet available, so it can be used to prototype an upcoming product in the absence of the actual hardware. That is a logical consequence of the requirement that, to be really useful, the virtual video driver has to accurately model real hardware.
It also had an immediate beneficial effect on the two ‘golden reference’ utilities that control V4L2 drivers: the command-line v4l2-ctl utility and its GUI equivalent qv4l2. After all, in order to test whether the vivid driver works you need applications that exercise it. As a result, both utilities improved as more features were added to the driver, since each new feature needed to be tested by those applications. So the driver has already fulfilled its promise to help test and improve applications.
All utilities mentioned in this article are part of the v4l-utils git repository.
If you would like to know more about V4L2 driver and application testing, then attend my presentation on this topic during the upcoming LinuxCon North America in Chicago!
Tags: Linux Kernel, LinuxCon North America 2014, Testing, video4linux
By now you’ve probably heard quite a bit about the newest generation of Wi-Fi, 802.11ac. I’ll spare you the gory details; just know it’s about 3x faster than 802.11n and will help improve the capacity of your network. Jameson Blandford and I were recently guests on the No Strings Attached Show podcast with Blake Krone and Samuel Clements.
I wanted to follow up the podcast with a blog post going over considerations for deploying, testing, and tuning 802.11ac.
Considerations for deploying 802.11ac
The first question to ask yourself is whether your switching infrastructure can handle 11ac. The answer is probably yes. The things to consider are port speed and Power over Ethernet (PoE) capabilities. You’ll want each access point to have a gigabit uplink to the switch: each 11ac access point can potentially dump several hundred megabits per second of traffic onto your wired network. It’s also not a bad idea to have 10 Gig uplinks from your access switches to distribution or your core. With even just a couple of access points on a single access switch, you may quickly find yourself wishing you had 10 Gig uplinks, as the quick arithmetic below shows.
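As a rough, illustrative calculation (assumed traffic numbers, not measurements): four 11ac access points on one access switch, each pushing 300–400 Mbps of real traffic at the busy hour, already add up to 1.2–1.6 Gbps heading upstream – more than a single gigabit uplink can carry.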
Next you’ll need to consider how you will power the access points. If you are like the majority of our customers, you will use PoE from your switches. While 11ac access points require 802.3at (PoE+) for full functionality, the Aironet 3700 will run happily on standard 802.3af PoE. In fact, it keeps three spatial streams on both radios, so performance does not suffer just because you have an 802.3af PoE infrastructure.
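For context, the standard PoE power budgets (these are the IEEE numbers, not specific to any one access point): 802.3af delivers up to 15.4 W at the switch port, roughly 12.95 W at the powered device after cable loss, while 802.3at (PoE+) raises that to 30 W at the port, about 25.5 W at the device.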
Will you deploy 80 MHz channels?
Tags: 11ac, 11n, 802.11, 802.11ac, 802.11n, access point, Aironet, chanalyzer, cleanair, deploying, Enterprise, gigabit, infrastructure, macbook, metageek, mobility, network, network engineer, networking, omnipeek, performance test, performance testing, podcast, PoE+, Prime Infrastructure, spatial stream, Testing, tuning, wi-fi, wifi, wild packets, wireless, wireshark
Schools, colleges and universities increasingly view cloud-based computing as an attractive option for delivering education services more securely, reliably, and economically.
Cisco cloud customer Electronic Testing Services (ETS) took part in a joint webcast to discuss the economic advantages of cloud computing. If you weren’t aware, ETS hosts the Advanced Placement exam for students. Its previous infrastructure saw low utilization rates because the exams run only once per year. By using Cisco cloud computing, ETS now sees revenues more closely matching expenses.
Tags: Cloud Computing, data center, education, standardized testing, Testing
By Steven Shepard, Contributing Columnist
I sat on a plane the other day with Walter Axe, 99 years old and a happily retired former telephone company engineer, on his way to see his newest great-granddaughter. During the three-hour flight, Walter regaled me with stories of his life in the Bell System.
He joined the company in 1931, fresh out of the Army. He dug ditches, put up poles (often using teams of horses), ran wire, worked in the switch room, and ultimately ended up in Illinois, where he found himself in, as he describes it, “the best job in the world.” Intrigued, I asked what the job was.
Tags: bell system, history, telecommunications, telephone network, Testing