Is Automated Software Testing the Future?

When we think of AI, we usually picture robots performing actions that could replace basic human tasks, though sometimes those tasks aren't basic at all. Development timelines have also changed drastically. This Forbes article talks about how AI could potentially take over testing phases because of the shorter time frames expected for gold-standard releases. With more and more companies releasing software, there is naturally more competition, and it isn't just competition driving the push for AI software testing: efficiency matters too. Cost structures are another driver. Many companies operate under cost structures that can heavily impact the quality of a product, and with the amount of testing needed to get a product into working condition, you may need a lot of people per team just to test software. AI-driven software testing seeks to resolve, or at least ease, that problem. But the main reason for AI in software testing is the narrow constraint of time. Since deadlines keep getting set shorter and shorter while standards are set higher and higher, the logical solution is to find the easiest and most effective way to work within those constraints.

This is an interesting topic because it's something you wouldn't really expect to make much sense. If you develop an AI to test software, the software could still be left vulnerable to bugs, because automated testing adds extra variables and "middlemen" into the equation. That's not to say AI testing is useless; it is probably extremely efficient at testing a lot of smaller things at once, but it could also become added overhead. New roles would have to be assigned just to test the software of the AI that is testing the software. In concept, AI testing seems like a great idea. However, there are bound to be bugs, and whether it is worth the risk and cost to implement AI software testing is ultimately up to the companies that do it. It is a very unique topic, and it will be interesting to see how companies implement this in the future.

Article: https://www.forbes.com/sites/forbestechcouncil/2018/12/17/ai-in-software-testing-will-a-bot-steal-your-spot/#63c50ce36710


Importance of API Lock-down

In recent times, it seems as though a few companies are letting major security flaws slip through their developer tools. A few days ago, Facebook had a massive exposure affecting some 6.8 million users due to a flaw in its Photos API. The bug allowed photos of users of certain apps built on this API to be leaked and pulled without their knowledge, and Facebook didn't realize it until 12 days after it had occurred. Now Google is in a similar boat. On Monday, Google revealed that its Google Plus social network (which was already in the process of shutting down) also had an API bug that exposed user information, this time on a much larger scale: the exposure hit over 51 million users. Interestingly enough, this happened just two months after Google had discovered another bug that exposed the data of 500,000 users, which was the initial reason for Google Plus to shut down. Although Google reported that there was no evidence of this data being "misused," the information is still out there, and it will surely attract the attention of people who would misuse it.

So why is locking down an API so important? An API is a predefined set of tools that developers use to perform actions in a program. Many of these are protocols that pull certain information or perform various tasks; the API is the middle ground that lets pieces of software talk to each other. When there is an API bug, information that was never supposed to be retrievable can be exposed to parties that should not have access to it. That was the case with Facebook's Photo API, and now with the Google Plus API. What we can learn from this is that whenever you are modifying or testing APIs, always make sure boundary tests are in place to verify that data cannot be pulled by any alternative means. Most of the time, only authorized developers are permitted access to certain APIs, which blunts outside attacks. However, if an API does not have the correct stops and security in place, user data is at massive risk, as shown by these two examples of mass data exposure.
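
To make the idea of a boundary test concrete, here is a minimal sketch in Python. Everything in it is hypothetical and only illustrates the pattern (the in-memory photo store, the grant table, and the get_photos function are made up for the example and are not any real service's API): an app without an explicit grant should never receive a user's photos, and a test should prove it.

# Hypothetical in-memory "photo API" used only to illustrate a boundary test.
PHOTOS = {
    "alice": ["beach.jpg", "dog.jpg"],
    "bob": ["car.jpg"],
}

# Which (user, app) pairs have granted photo access; a real system would use
# OAuth scopes or similar instead of a dictionary.
GRANTS = {
    ("alice", "photo_app"): True,
}

class AuthError(Exception):
    pass

def get_photos(user_id, app_id):
    """Return a user's photos only if the app holds an explicit grant."""
    if not GRANTS.get((user_id, app_id), False):
        raise AuthError(f"{app_id} is not authorized for {user_id}'s photos")
    return PHOTOS[user_id]

def test_unauthorized_app_cannot_pull_photos():
    # The boundary test: data must not be reachable by an app without a grant.
    try:
        get_photos("bob", "photo_app")
        assert False, "expected AuthError"
    except AuthError:
        pass

def test_authorized_app_sees_only_granted_photos():
    assert get_photos("alice", "photo_app") == ["beach.jpg", "dog.jpg"]

if __name__ == "__main__":
    test_unauthorized_app_cannot_pull_photos()
    test_authorized_app_sees_only_granted_photos()
    print("boundary tests passed")

Facebook's actual bug lived in far more complex code, of course, but the principle is the same: the negative case, the access that should be denied, deserves its own test, not just the happy path.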

 

Article: https://www.softwaretestingnews.co.uk/17470-2-google-plus-closure-api-bug/

Security Breaches and User Information

User data stored on websites is very important to keep secure. Although there are some technicalities when signing up for websites that let you upload pictures, movies, or other media, you should always take certain precautions when uploading anything online. Security breaches are no surprise, and with massive social media websites such as Facebook, Twitter, or Tumblr, there are bound to be hackers trying to break into the back end to steal and dump data. Unfortunately, breaches like this aren't too uncommon. Just recently, Facebook was hit with an enormous security breach in which about 6.8 million users' data was exposed. With a company as large as Facebook, you would think their security and software back ends would be able to block attacks, but this is obviously not the case. So what exactly happened? A bug slipped through in the API, an oversight by the software QA team at Facebook.

Facebook released a statement saying that its Photo API had a vulnerable bug that let app developers access the photos of over 6 million users. The worst part was that the bug wasn't noticed until 12 days after it had occurred. Not only were Facebook's users affected, but the app developers who built on Facebook's Photo API also suffered the consequences. Reportedly, over 1,500 applications utilized this API.

What can we learn from this? Software testing is not exclusive to how code or programs run; it also applies to security. There are teams dedicated solely to testing for security holes and backdoors for exactly this reason, so that end users can be confident in the products they use. One small error in the quality of Facebook's Photo API caused a major breach with a ton of collateral damage: over 700 app developers and over 6 million Facebook users were affected. Interestingly enough, this isn't Facebook's only massive data breach in recent times, and its end users are definitely not happy about it. Repeated vulnerabilities like these are detrimental to a product's future quality and security, and making sure you have protocols in place that check for them is very important to avoid these kinds of situations.

 

Article: https://fossbytes.com/facebook-hit-by-another-security-breach-6-8-million-users-photos-leaked/

Big Companies and Flaws

In software testing and quality assurance, it is very important that issues be buffed out before the release of a product. A lot of companies pour a ton of resources into this, and inevitably things still slip through the cracks. One of the issues developers deal with a lot post-launch is security. A lot of people may think security issues are exclusive to smaller companies, assuming that "the bigger the company, the better the security," but this has been proven wrong time and time again. Throughout Windows 10's 2018 history, a lot of big updates were released, and a lot of them actually ended up breaking things. Let's go over some of the issues and look at what a little more software testing and quality assurance could have avoided.

Microsoft had been planning to release a massive Windows 10 update in April that added a ton of new features (including security features) to its flagship operating system. However, a very bad bug was discovered that caused Windows 10 to spam the blue screen of death. Microsoft could not ship the big update with an issue like that, because it would have left the operating system even more unstable than it already was. After the issue was fixed, Microsoft was finally ready to ship the update after a long delay. Once the update went out, however, there were over 600 million reports of Google Chrome freezing and crashing after the update.

I think things like this happen because of rushed deadlines. While scheduling updates, there is a list of prioritized tasks that need to be finished in order to meet a deadline, and in that rushed period bugs and glitches are bound to be overlooked because of the stressful development runs. In this case, Microsoft actually had to take the update offline and roll it back because users' files were being deleted. From this, we learn that time management is important, but so is making sure rushed development doesn't end up making the end users' quality of product even worse.

Article: https://www.techradar.com/how-to/windows-10-april-2018-update-problems-how-to-fix-them

Live Monitoring and Testing

This article from softwaretestingmagazine.com talks about how testing and monitoring live, active services is a key element of software quality assurance. After deployment, making sure all of the bells and whistles of a service are working and up to date is very important. Not only does it matter on the programmer's end, it is extremely important on the client side too, because the experience should be smooth for both of you. Without proper testing of a product or service, it is impossible to correctly gauge how it will perform, which is why pre-launch and post-launch maintenance testing is a must, especially today. The article then goes into several online services that monitor the performance and uptime of a service. Let's go into some of these now.

A very important aspect of tracking a service is recording its uptime. A service called StatusCake does just this. StatusCake is a paid monitoring service that can monitor page speeds at extremely high rates, and it claims to have a very large system for monitoring big servers. Another nice thing about StatusCake is that it can set reminders about domain renewals, SSL monitoring, and much more. Although it may seem like monitoring the uptime of your service wouldn't matter much, it is actually crucial in many ways. One thing I learned from this article is that by monitoring your uptime and logging how long a service stays online without failing, you can determine where issues lie when something does occur. Something such as a service outage or service lag can easily be tracked and tested if you have tools available to help you track it.
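
Below is a rough sketch of what such an uptime check boils down to, written in plain Python. This is not StatusCake's API (their service does far more); the URL and interval are placeholders, just to show the probe-and-log pattern.

import time
import urllib.request

def check(url, timeout=5.0):
    """Probe a URL once; return (is_up, seconds_to_respond)."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            up = 200 <= resp.status < 400
    except Exception:
        up = False
    return up, time.monotonic() - start

if __name__ == "__main__":
    url = "https://example.com"   # placeholder target
    for _ in range(3):            # a real monitor runs continuously
        up, elapsed = check(url)
        print(f"{url} up={up} latency={elapsed:.3f}s")
        time.sleep(10)            # check interval

Logging the result of every probe is what makes the history useful later: when an outage happens, you can line it up against the last known-good check.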

Tracking these issues in a system can be tricky, but there is another testing tool that can help us do exactly this: Uptrends. Uptrends is another monitoring tool that double-checks and notifies you when something is wrong with your service. One of the harder things is pinpointing exactly when or where an error in a service occurs. The interesting thing about Uptrends is that it gives you detailed reports and statistics on these errors and also sends out email alerts when something goes wrong. This is another very important aspect of software quality assurance and testing: when something goes wrong, you need information about the failure as fast as possible. With services like this, you can receive notifications as soon as a fault happens and act on the issue accordingly.
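
Here is a small sketch of that "double check, then alert" idea. Again, this is not Uptrends itself; the probe, the re-check delay, and the alert function are stand-ins, and a real setup would send an email or page someone on call rather than print to the console.

import time
import urllib.request

def is_up(url, timeout=5.0):
    """Single probe: True if the URL answers with a non-error status."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except Exception:
        return False

def alert(message):
    # Stand-in for an email/SMS/webhook notification.
    print(f"ALERT: {message}")

def monitor_once(url, recheck_delay=30):
    """Alert only after two consecutive failures, to rule out a momentary blip."""
    if not is_up(url):
        time.sleep(recheck_delay)
        if not is_up(url):
            alert(f"{url} failed two consecutive checks at {time.ctime()}")

if __name__ == "__main__":
    monitor_once("https://example.com")   # placeholder target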

Many services are available to help developers and clients with software testing and quality assurance. Whatever your needs, it is very important to keep a close eye on operations after a service is launched or completed, especially if it is being upgraded or modified in any way.

 

Article: http://www.softwaretestingmagazine.com/knowledge/web-hosting-monitoring-services-and-tools/

Round Earth Test Strategy

This article is very interesting in that it offers a new perspective: a testing strategy that puts the front-end user perspective first. It starts off by explaining the usual pyramid testing scheme, where the user perspective and UI sit at the tip of the pyramid. This article runs contrary to all of those other testing pyramids because, by its account, the top of the pyramid is just as, if not MORE, important than the lower levels. Typically, in a test automation pyramid you have unit tests at the bottom (the long base), then your service tests (integration, component, and API tests) in the middle slice, and finally, at the top, the user interface and what the user actually sees. Knowing that, this article explains how the pyramid should effectively be flipped upside down, giving the user perspective greater weight; you would still have your unit and integration tests on the bottom and middle, they just wouldn't be as large. The article takes aim at the usual justification, "Just as a triangle has more area in its lower part than its upper part, so you should make more automated tests on lower levels than higher levels." As the author puts it, "This is not an argument; this is not reasoning. Nothing in the nature of a triangle tells us how it relates to technology problems. It's simply a shape that matches an assertion that the authors wanted to make. It's semiotics with weak semantics." Pretty much, the article is saying that the shape of the triangle these schemes are based on doesn't really carry much weight when it comes to technological problems.
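
To put the three layers of that pyramid into code, here is a toy example in Python. The checkout component and its tests are invented for illustration (they are not from the article); the point is only how a unit test, a service-level test, and a user-facing test differ in what they exercise.

def apply_discount(price, percent):
    """Pure logic: the kind of code the wide unit-test base covers."""
    return round(price * (1 - percent / 100), 2)

class CheckoutService:
    """Middle layer: components wired together, exercised by service/API tests."""
    def total(self, prices, percent):
        return sum(apply_discount(p, percent) for p in prices)

# Unit test: many of these, fast, at the base of the classic pyramid.
def test_apply_discount():
    assert apply_discount(100.0, 10) == 90.0

# Service test: fewer, covering components working together.
def test_checkout_total():
    assert CheckoutService().total([100.0, 50.0], 10) == 135.0

# UI/end-to-end test: fewest in the classic pyramid; the article argues this
# user-facing layer deserves far more weight. Here it is only a stub; a real
# one would drive a browser.
def test_user_sees_discounted_total():
    assert f"Total: ${CheckoutService().total([100.0], 10):.2f}" == "Total: $90.00"

if __name__ == "__main__":
    test_apply_discount()
    test_checkout_total()
    test_user_sees_discounted_total()
    print("all three layers passed")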

My reaction to this article is that I agree with what it describes. Like the article, I think that in a project, each layer above the next can often be far more complex than the levels below it, which in turn carries higher risk. The model the author proposes is the Round Earth model, which says you should think of technology as concentric spheres, where each layer outward toward the user can grow dramatically. This article opened my eyes and made a lot more sense of how certain models don't really support what they claim to stand for.

 

Article: http://www.satisfice.com/blog/archives/4947


The Future of Performance Analytics

In the future, companies are going to invest much more heavily in data analytics due to the sheer amount of information they will need to collect and maintain. This is an issue that not only needs to be solved, but also needs its problems addressed up front before progress can be made, which is what is currently hindering it.

This article/guide talks about the rapid speeds needed to meet deadlines for "high demand" analytical solutions. It goes into how certain markets are investing in analytical technologies in order to predict the future and thereby market services optimally. However, the article states that three main factors are a great hindrance to this push: security, privacy, and error-prone databases. Not only do these kinds of methods take time, they also need to be secure, both to protect massive amounts of data and to operate as efficiently as possible.

Upon reading this article, what interested me is that North America accounts for the largest market share due to the growing number of "players" in the region. Per the article, much of this investment is going toward cloud-based solutions. What I found interesting, however, is that this company (Market Research Future) provides research to its clients. They have many teams devoted to specific fields, which is why they can craft their research very carefully. What I find useful about this posting is that it shows just how important the future of data analytics and organization can be. With the future of data collection, there will need to be more optimized solutions to handle and control these types of research data.

The content of this posting confirms my belief that cloud computing and cloud-based data analysis and testing will continue to grow and evolve rapidly over the coming years. With more and more companies migrating to cloud-based systems, not only for internal use but for client needs as well, we will see a great push toward optimized data sorting and faster data transfer. Expansion in cloud computing and web-based services will become the main staple of future products like this.

Article: http://www.developsense.com/courses.html