Hello all, I trust everyone’s week is going well. Today we get to hear from Michiel Beenen, the founder of TechConnect, based in the Netherlands. Michiel recently heard about the new Cisco WAP371 and wanted to see if this new 802.11ac wireless access point hit the mark. He has already deployed the WAP321 and WAP561, so the new WAP371 had piqued his interest. Here is what Michiel had to say:
“After many years of working with Cisco Aironet and Small Business devices, it was time for our company to start testing new access point models that would support Wi-Fi AC (higher speeds). But for many smaller companies, and even at home, Aironet products are just a bit too much and require far more knowledge than you would need in an SMB/home environment.
So after testing out the WAP321 and WAP561 last year we decided to buy a couple of Cisco WAP371 access points and so far the experience has been great. Installation was as easy as plugging it into your network, browsing to the IP of the access point and then following the Wizard.
The wizard simply asks for a new admin password, Wi-Fi network names and security settings, and whether you want to set up a ‘cluster’ of WAP access points. A cluster can be very handy when you have more than one access point: it takes care of things like roaming between access points and keeps the most important settings in sync, so the SSID and other settings are automatically replicated to every device in the cluster.
After the wizard is finished you get back into the graphical user interface, and from there you can basically do whatever you want: adjust settings like QoS, radio channels, and VLANs, and also configure guest support (with or without a portal).
So far we have been testing with multiple iPhones, iPads, Android devices, and MacBooks, plus some AirPlay speakers, and it all seems to work perfectly fine.
Speed-wise, we have reached speeds of up to 870 Mbps so far over AC Wi-Fi and up to 250 Mbps on the 2.4 GHz band.
All in all, we are very satisfied with this product and happy that Cisco is coming up with products like this for the Small Business and Home users.”
More about TechConnect: TechConnect started in 1997, from the melting pot of several successful tech community web sites. Through the years, it has evolved into an internet company whose focus lies on technology solutions, gaming, music and online advertising. In the beginning, TechConnect was about the passion for technology, and the drive to make dreams come true.
TechConnect’s strength lies in the unique blend of skills and knowledge brought about by the varied and international nature of its employees. Hailing from across Europe, and from all walks of life, TechConnect employees are the lifeblood and heartbeat of the company, with their ideas, varied life experience and professional training. TechConnect helps small and medium sized companies in the Netherlands and Belgium with IT solutions and web services.
Michiel Beenen is the Founder and Managing Director of TechConnect. An internet entrepreneur and online gaming enthusiast, Michiel started the online community (GameConnect) out of his passion for bringing friends together. Michiel has worked on many projects on the cutting edge of online and offline technology, and each has allowed him to build an unparalleled network of personal connections at all levels of the advertising and gaming worlds. He loves nothing more than creating new opportunities and projects driven by passion and technical aptitude.
Thank you to Michiel for taking the time out to pen this up for us. Make it a great rest of the week.
Tags: #80211ac, #wireless, branch, edge, ip, network, performance, router, smb, switch, WAP
This post is co-authored by Martin Lee, Armin Pelkmann, and Preetham Raghunanda.
Cyber security analysts tend to redundantly perform the same attack queries with different input data. Unfortunately, the search for useful meta-data correlation across proprietary and open source data sets may be laborious and time consuming with relational databases as multiple tables are joined, queried, and the results inevitably take too long to return. Enter the graph database, a fundamentally improved database technology for specific threat analysis functions. Representing information as a graph allows the discovery of associations and connection that are otherwise not immediately apparent.
Within basic security analysis, we represent domains, IP addresses, and DNS information as nodes, and represent the relationships between them as edges connecting the nodes. In the following example, domains A and B are connected through a shared name server and MX record despite being hosted on different servers. Domain C is linked to domain B through a shared host, but has no direct association with domain A.
This ability to quickly identify domain-host associations brings attention to further network assets that may have been compromised, or assets that will be used in future attacks.
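The shared-infrastructure lookup described above can be sketched even without a dedicated graph engine. The following minimal Python illustration mirrors the example: the domain names, name server, MX record, and IP addresses are all hypothetical, and a production system would use a real graph database (the post’s tags mention Titan and Gremlin) rather than an in-memory dictionary:

```python
# Hypothetical infrastructure records mirroring the example in the post:
# domains A and B share a name server and an MX record but sit on
# different hosts; domains B and C share a host.
infrastructure = {
    "domainA": {"ns1.example", "mx.example", "198.51.100.1"},
    "domainB": {"ns1.example", "mx.example", "203.0.113.5"},
    "domainC": {"203.0.113.5"},
}

def shared_infrastructure(d1, d2):
    """Infrastructure nodes that connect two domain nodes in the graph."""
    return sorted(infrastructure[d1] & infrastructure[d2])

print(shared_infrastructure("domainA", "domainB"))  # ['mx.example', 'ns1.example']
print(shared_infrastructure("domainB", "domainC"))  # ['203.0.113.5']
print(shared_infrastructure("domainA", "domainC"))  # []
```

The set intersection is the graph traversal in miniature: any non-empty result is a path of length two between two domains, which is exactly the kind of association that is expensive to surface with relational joins.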
Tags: analysis, Big Data, correlation, D3, Domain, edge, fast, Graph, Gremlin, IE, Intelligence, internet explorer, IP address, name server, node, relationships, research, threat, Titan, TRAC, vertex, visual, zero-day
A consequence of the Moore Nielsen prediction is the phenomenon known as Data Gravity: big data is hard to move around, so it is much easier for smaller applications to come to it. Consider this: it took mankind over 2000 years, up to 2012, to produce 2 exabytes (2×10^18 bytes) of data; now we produce that much in a day! And the rate will only go up from here. With data production far exceeding the capacity of the Network, particularly at the Edge, there is only one way to cope, which I call the three mega trends in networking and (big) data in Cloud computing scaled to IoT, or as some say, Fog computing:
- Dramatic growth in applications specialized and optimized for analytics at the Edge: Big Data is hard to move around (data gravity), and we cannot move the data to the analytics fast enough, so we need to move the analytics to the data. This will cause a dramatic growth in applications specialized and optimized for analytics at the edge. Yes, our devices have gotten smarter; yes, P2P traffic has become the largest portion of Internet traffic; and yes, M2M has arrived as the Internet of Things. There is no way to make progress but to make the devices smarter, safer and, of course, better connected.
- Dramatic growth in the computational complexity to ETL (extract-transform-load) essential data from the Edge to be data-warehoused at the Core: Currently most open standards and open source efforts are buying us some time to squeeze as much information in as little time as possible via limited connection paths to billions of devices and soon enough we will realize there is a much more pragmatic approach to all of this. A jet engine produces more than 20 Terabytes of data for an hour of flight. Imagine what computational complexity we already have that boils that down to routing and maintenance decisions in such complex machines. Imagine the consequences of ignoring such capability, which can already be made available at rather trivial costs.
- The drive to instrument the data to be “open” rather than “closed”, with all the information we create, and all of its associated ownership and security concerns, addressed: Open Data challenges have already surfaced, and there comes a time when we begin to realize that an Open Data interface, with guarantees about its availability and privacy, needs to be defined and enforced. This is what drives the essential tie today between Public, Private and Hybrid cloud adoption (nearly one third each), and with the ever-growing amount of data at the Edge, the issues of who “owns” it and how access to it is “controlled” become ever more relevant and important. At the end of the day, the producer/owner of the data must be in charge of its destiny, not some gatekeeper or web farm. This should not be any different from the very same rules that govern open source or open standards.
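The edge-side ETL idea in the second trend can be sketched in a few lines: reduce a high-rate telemetry stream locally and ship only the summary toward the core. This is a minimal illustration, not any particular product’s pipeline; the window size, field names, and alert threshold are all hypothetical:

```python
from statistics import mean

def summarize_window(readings, alert_threshold=100.0):
    """Boil a window of raw edge readings down to the few numbers worth
    moving to the core: count, mean, max, and whether an alert fired."""
    peak = max(readings)
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": peak,
        "alert": peak > alert_threshold,
    }

# A burst of raw telemetry stays at the edge; only the summary travels.
window = [98.2, 99.1, 101.7, 97.4]
summary = summarize_window(window)
print(summary["count"], summary["alert"])  # 4 True
```

The point of the sketch is the ratio: the raw window never leaves the device, and the core receives a fixed-size record regardless of how fast the sensors sample, which is the only way a jet engine’s terabytes per hour can become routing and maintenance decisions.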
Last week I addressed these topics at the IEEE Cloud event at Boston University with wonderful colleagues from BU, Cambridge, Carnegie Mellon, MIT, Stanford and other researchers, plus of course, industry colleagues and all the popular, commercial web farms today. I was pleasantly surprised to see not just that the first two are top-of-mind already, but that the third one has emerged and is actually recognized. We have just started to sense the importance of this third wave, with huge implications in Cloud compute. My thanks to Azer Bestavros and Orran Krieger (Boston University), Mahadev Satyanarayanan (Carnegie Mellon University) and Michael Stonebraker (MIT) for the outstanding drive and leadership in addressing these challenges. I found Project Olive intriguing. We are happy to co-sponsor the BU Public Cloud Project, and most importantly, as we just wrapped up EclipseCon 2014 this week, very happy to see we are already walking the talk with Project Krikkit in Eclipse M2M. I made a personal prediction last week: just as most Cloud turned out to be Open Source, IoT software will all be Open Source. Eventually. The hard part is the Data, or should I say, Data Gravity…
Tags: Big Data, core, Data Gravity, Eclipse, edge, Enescu, ETL, Fog computing, IEEE, internet of things, IoT, krikkit, M2M, Moore, Nielsen, Open data, open source, virtualization
True any-to-any collaboration means you can collaborate via rich media in real time, no matter where you are or who you want to collaborate with. You can use the device you want and collaborate the way you want with voice, video, messaging, or content sharing -- imagine never again hearing the phrase “I will take care of that when I get back into the office.”
Cisco is striving to make this vision a reality and has made significant progress. For example, Cisco recently announced capabilities for:
- Mobile and teleworkers: Making voice, video, messaging and content available outside the corporate network to mobile Jabber users and teleworkers without needing a VPN. Best of all, our customers can realize these benefits with no additional costs.*
- Intercompany and consumer collaboration: Enabling real-time voice, video, and data-sharing capabilities for businesses to collaborate with consumers and business partners using Jabber Guest. Customers or partners simply click a URL, website link, or mobile application to start the interaction. Organizations can build these capabilities into their website or mobile application with the included SDKs.
These capabilities are made possible by the Cisco Collaboration Edge Architecture and an important component of this architecture, the newly released Cisco Expressway. Together, they bridge collaboration islands to enable any-to-any collaboration.
The diagram below shows the use cases that the architecture delivers.
Tags: Cisco, collaboration, edge, expressway, Jabber Guest, video
Previously I wrote about how we’d won the “Best Carrier Ethernet Aggregation Product” award with the Cisco ASR 9000 System at Carrier Ethernet World Congress in Amsterdam. This momentum continues on the other side of the globe, where Alex Zinin, our Chief Technology Officer for the Asia Pacific theatre, recently accepted several awards at the annual Telecom Asia Readers Choice Awards 2011.
Tags: ASR9000, carrier ethernet, Cisco, core, edge, IIR, MEF, mpls, Telecom Asia, Zinin