
LEADING THE HARDWARE REVOLUTION

What good is software if it doesn't have a device that can make the most of it? Usually, companies create software that is fine-tuned to existing hardware such as phones, laptops, and desktop computers. That trend is changing, especially among the giant tech companies of the world, which increasingly pursue the Apple approach: creating software along with premium hardware to match.

From smart home devices to augmented reality, here are some of the sleek, ergonomically designed hardware products that could change the future as we live it.

Smart Home Devices

A smart home device can control everything from security to surround sound systems and the temperature in a house.

Google Home is now one of the most popular smart home devices, along with Amazon’s Echo. Apple has the HomePod, which is essentially a smart speaker that uses Siri to play music.

Beyond the big tech giants, a number of emerging startups have also created smart devices that fulfill various functions around the home. For instance, Curb is an energy monitoring device that gives you real-time data on how much energy your home consumes. Ecobee3 is a programmable thermostat that adjusts the temperature depending on whether you're in the room or not. Similarly, the Belkin WeMo Switch mobile application lets you turn appliances on or off. Door unlocking systems, garage door openers, mood lighting devices, even a smartwatch that starts the coffee machine brewing when you wake up: a seemingly endless selection of smart devices is driving the hardware revolution of the 21st century.

Augmented Reality Phone

The ZenFone AR is a new phone on the market that uses augmented reality (AR) to deliver a more interactive user experience. It comes with the Google Daydream View headset if you pre-order from Asus. Wearing the headset, you can experience your phone like an interactive hologram computer out of a sci-fi movie.

The Asus ZenFone AR is a large, high-end Android device notable for launching with both Tango (a feature that displays virtual objects on top of real surroundings) and Google Daydream support. The ZenFone AR has what it takes for good phone VR thanks to its high-resolution OLED screen, which measures 5.7 inches across with a resolution of 1440 by 2560. It could mark an important step toward deeper VR integration in everyday hardware devices.

DIY Computer

Winner of Hottest Hardware Startup at the Europas 2018 awards, Kano is a simple do-it-yourself computer building kit that allows children not only to assemble their own computer but also to use that device to code. Kano has also put out supplementary kits, including an HD touch screen, a DIY camera, a speaker, and a pixel board. Paired with a storybook, the kits allow children to understand the connection between hardware and software and to learn about computing through play.

Translator Earbud

Mymanu’s wireless earbud can translate about 37 languages in real time. Mymanu claims that the Clik’s translation engine, which was developed in-house over four years, is accurate and speedy. It is backed by the Mymanu Translate app that provides a minimalist voice-to-text interface for iOS and Android devices.

A handy device for travelers, the Clik could also have the potential to boost international trade. It looks like it won't be long before we get a device similar in function to the Babel fish from The Hitchhiker's Guide to the Galaxy.

Supercharged Internet

Starry and its various devices provide a service that supercharges Internet speeds, with technology capable of up to 1 gigabit per second and typical plans aiming for around 200 megabits per second. A device called the Starry Beam transmits Internet signals across a city, and those signals are picked up by a receiver called the Starry Point that hangs outside your window like an antenna. You can hook up your own router or use the Starry Station, the company's own Wi-Fi hub, which can tell you right from its screen how fast your Internet is performing.

Always Moving Forward

The goal of technology has always been to make our lives easier, and the hardware industry is at the forefront of that effort. We can expect hardware revolutions in more industries in the years to come. After all, an evolution in software is best supported by hardware that can take on the load. Expect to see more sophisticated, smarter, and more durable hardware across all industries in the future.


The Future of Big Data

Data is a collection of facts and statistics used for reference or analysis. Businesses collect, measure, report on, and analyze data to better understand their clients, products, and customers.

Big data is the analysis of large sets of data that reveal certain patterns, trends, and associations related to human behaviour and business interactions. In today’s world, information is continuously being collected, which is why big data analytics has come to the forefront of the IT industry.

Glassdoor, one of the fastest-growing job recruiting sites, released a 2017 report ranking the 50 hottest jobs. It gave data analysts an overall job score of 4.8 out of 5, with a job satisfaction score of 4.4 out of 5 and an average base salary of $110,000.

Why is big data so popular in the job market? Because big data will always be needed in the technological world we live in.

 

The Evolving World of Big Data

 

According to Bernard Marr in his article “17 Predictions about the Future of Big Data Everyone Should Read,” big data has taken over the business world. Data is continually growing and developing, and it will never stop. New information will always come into existence. Here are some of the main predictions formulated by industry experts that Marr believes every big data analyst should keep in mind.

 

  1. New devices with AI technology are expected to make more appearances in the future, including robots, self-driving vehicles, virtual assistants, and smart advisers.
  2. The job market for big data is expected to grow even more, and there will not be enough qualified people to fill all the open positions. Some companies will have to look inward and train existing personnel in big data analytics to fill those roles.
  3. Another reaction to the shortage of data analysts will be an increase in the use of cognitive computing, which allows computerized models to mimic how humans think and react to information.
  4. New and more intuitive analyst tools will appear on the market, and some of them will allow non-analysts to access data. Furthermore, experts believe that programs that let users act on data and make decisions in real time will ultimately come out on top.
  5. Simultaneously, more companies may look to buy algorithms rather than attempt to program them. Expect algorithm markets to grow.
  6. Privacy is and will continue to be a huge issue facing big data. The public should expect to see more ethics violations related to data.
  7. More companies will try to use big data to drive up revenue.
  8. More companies will sell not just data but also ready-for-use insights.
  9. Big data, due to its sheer volume, can become too cumbersome. Many businesses have no use for all of the data they collect. One day, “fast data” and “actionable data” with specific uses may replace big data.

The Future of Big Data

Imagine if there were a computer that had the ability to tweak the social and economic constructs of our society; a computer like that could ultimately evolve and mould society to its liking.

The theory about this omnipotent computer is called the Universal Graph.

In mathematics, a universal graph is an infinite graph that contains every finite graph as a subgraph. In simpler terms, it is a graph or network in which a single piece of information can be connected with other bits of information until all finite information pieces are integrated into one single graph. You can think of the Universal Graph as a computer that contains all the information in the world, a "supercomputer" of sorts.

Not only does the theory of a Universal Graph exist, but the required technology already exists in the disconnected forms of big data held by large companies like Netflix, Google, Facebook, and others.

The Universal Graph is designed to take the information that all these entities possess and put it together in a computational alternate reality of our world. This alternate reality is then subjected to formulas that can determine large-scale patterns all over the world. It is similar to how big data companies collect and analyze data now, but on a universal scale.
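
To make the idea slightly more concrete, here is a minimal, purely illustrative Python sketch of how data points from separate silos could be stitched into a single graph. Every entity name and relationship below is invented for the example; a real universal graph would obviously be incomparably larger and more sophisticated.

from collections import defaultdict
graph = defaultdict(set)  # adjacency list: each node maps to the nodes it touches
def connect(a, b):
    # link two pieces of information in both directions
    graph[a].add(b)
    graph[b].add(a)
# data points that currently live in separate corporate silos
connect("person:alice", "viewing_history:alice@streaming-service")
connect("person:alice", "search_history:alice@search-engine")
connect("person:alice", "purchase:running-shoes")
connect("purchase:running-shoes", "product:running-shoes")
connect("product:running-shoes", "company:shoe-maker")
# once everything sits in one graph, a query can walk from any node to any other
print(sorted(graph["person:alice"]))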

The Universal Graph

So, what exactly will this supercomputer include? The possibilities are endless.

A Universal Graph could technically contain information about anything and everything, from an animal's set of genes to a particle, a book, a company, or even an entire person, and it would interlock these data points with the rest of the data. For example, the Universal Graph could encode the information from a textbook into a specific gene and splice that gene into a human being, who would then carry a gene that lets them know everything within that textbook.

A Universal Graph may sound like a fantastical science fiction concept, but it could become a possibility in the near future.

 

Right now, big corporations are already collecting data about you as an individual. They know your date of birth, college or high school transcripts, shopping habits, social media posts, and even the things you eat.

 

It’s an Information World

 

The world is getting closer and more connected as technology continues to advance, which may or may not be a good thing. What is known is that big data is here to stay, and something like the Universal Graph, which might have seemed far-fetched 20 years ago, has become a very plausible concept we may soon see. Nothing about the future is certain. The technological world that we live in is full of surprises. Let's wait and see what the future of big data has in store.


Importance of Having a Big Data Recovery Strategy

Big data is essentially a large data set that, when analyzed, reveals new patterns or trends that help determine human behaviour, preferences, and interactions. Public organizations use big data to gain insights about people's needs and wants so they can better plan community improvements. Many companies rely on big data to help them understand consumer behaviour and predict future market trends. Big data can benefit individuals, too, as information about people's responses to products and services can in turn be used by consumers to make decisions about what to purchase and what to avoid.

Protecting Corporate Data

As the world's digital landscape continues to expand and evolve, our ability to efficiently and effectively secure big data is becoming ever more important. In the corporate world in particular, these datasets are essential to ensuring that targets are being met and that companies are moving in the right direction. Without this information, it would be much harder for organizations to market to the appropriate audience. As big data becomes exponentially more relevant, losing this data means losing potentially valuable information, which could lead to significant monetary loss and wasted time and resources.

For growth to continue, businesses must ensure that their databases are backed up and can be restored in the event of a disaster. With such massive amounts of important information at stake, preventing data loss by having a recovery strategy can be extremely helpful. To create an effective restoration plan, organizations should first determine their processes for data loss avoidance, high-speed recovery, and constant visibility.

Data Loss Prevention

To minimize the chances of losing important information, a key part of securing data is implementing and communicating procedures that prevent data loss situations in the first place. One solution is simply to limit the access of users who do not need big data for their tasks. Minimizing access and keeping track of which employees have access reduces the chances of someone erasing, accidentally wiping out, or misusing the data.
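
As a rough illustration of what "limiting access" can look like in practice, here is a small Python sketch of a role-based check with logging. The role names, dataset names, and policy are hypothetical placeholders, not a recommendation for any particular access-control product.

import logging
logging.basicConfig(level=logging.INFO)
# hypothetical policy: the roles that genuinely need each dataset
ACCESS_POLICY = {
    "customer_transactions": {"data_analyst", "database_admin"},
    "web_clickstream": {"data_analyst"},
}
def can_access(user, role, dataset):
    allowed = role in ACCESS_POLICY.get(dataset, set())
    # log every request so there is a record of who touched (or tried to touch) what
    logging.info("access request user=%s role=%s dataset=%s allowed=%s", user, role, dataset, allowed)
    return allowed
print(can_access("priya", "data_analyst", "customer_transactions"))  # True
print(can_access("jordan", "marketing", "customer_transactions"))    # False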

Limiting Downtime

Coming up with a high-speed solution that limits downtime is also crucial to a recovery strategy's success. For example, establishing a recovery point objective (RPO) is quite beneficial. The RPO is the maximum amount of data, measured in time, that an organization can afford to lose after an outage or disaster: in effect, how far back the most recent usable backup is allowed to be. Knowing this allows authorized employees to work as quickly as possible in the event of an outage to restore data. RPO times can vary with the size of the data sets involved, so keeping in mind whether a business deals with large or small data sets can help in setting one. Regardless of size, the ultimate goal is to reduce the downtime of your data and find the most efficient way to restore the affected data set.
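
As a simple sketch of how an RPO target can be applied, the following Python snippet checks how much data would be at risk if the system failed right now, based on the age of the most recent good backup. The 15-minute target and the sample timestamps are assumptions chosen purely for illustration.

from datetime import datetime, timedelta
RPO = timedelta(minutes=15)  # assumed maximum tolerable window of lost data
def rpo_status(last_good_backup, now=None):
    now = now or datetime.now()
    exposure = now - last_good_backup  # anything written after this point is at risk
    state = "RPO breached" if exposure > RPO else "within RPO"
    return "{}: {} of data at risk (target {})".format(state, exposure, RPO)
print(rpo_status(datetime.now() - timedelta(minutes=7)))   # within target
print(rpo_status(datetime.now() - timedelta(minutes=40)))  # breached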

Scheduled Testing and Restoration

Another way to ease the minds of consumers and employees when it comes to security is deciding on a testing and restoration frequency. Setting a semi-annual or bimonthly testing and restoring schedule helps ensure data sets are accurately updated and adequately protected. More importantly, the testing schedule should indicate whether the chosen RPO is realistic. These tests ultimately demonstrate that, if a disaster or accident were ever to happen, a company's big data workload would have a good chance of surviving.

Know Your Data

As there is a variety of big data environments, recovery strategies can be complex and wide-ranging. Many organizations operate and analyze unique data sets in different ways. Knowing what kind of data set you're working with and its projected size will aid in producing an effective recovery strategy. It is also important to note that a strong recovery strategy requires the data to have constant visibility. Knowing where each dataset is stored, how to access it, and keeping snapshots of it provides an easy and efficient solution in case of an outage or disaster. For example, a live replication acts much like a UPS does for power: it gives you a secondary source in case of an accident, ensuring that the data is always saved in a second location.

When constructing any replication, it is crucial to have a certified professional present or trained individuals involved in the process. Although a live replication is an attractive solution, since it ensures a smooth data transfer between devices, an incorrectly built live replication runs the risk of losing data. Therefore, the people involved in building the live replication should be knowledgeable about the topic and the process.

Keep Your Data Safe

Our world is becoming more technologically savvy, and organizations will continue to rely heavily on big data to predict future trends, analyze current patterns, and determine new consumer characteristics. Ensuring that datasets are consistently backed up and restorable means organizations can continue to flourish and improve their services. There are many solutions available for building an efficient and effective recovery strategy. Every organization is unique when it comes to data and how it analyzes its own datasets, but one thing all organizations should have in common is a firm plan for how to construct, implement, and improve their data recovery strategy.


Component Selection: Considering Energy Efficiency and Long-term Cost

Two key aspects of component selection are cost and energy efficiency. Within corporate business procedures, neglecting either aspect when choosing IT parts can lead to loss of time and capital. With more corporations relying on IT equipment than ever before, the infrastructure that services the end user is vital. Whether they are upgrading an old system or building a new one, businesses must take into account the components’ power efficiency and specifications. They must have the foresight to ensure that the components selected will be both cost-effective and reliable.

As data becomes larger and more sensitive, businesses have to pay special attention to details such as long-term reliability when selecting components for desktops and servers. For example, when building or upgrading a machine, the CPU, which is the heart of a computer, is often the most important factor to consider. A CPU's cost varies with its specifications, and those specifications directly affect the performance of a system and the applications it can run. Depending on the end user's workload, you will want to choose the right CPU for the job.

Memory efficiency is an aspect of component selection that is often overlooked, but paying attention to it can save businesses a lot of money in the long run. The speed and size of each memory module affect how much power every gigabyte of memory uses. If the user does not require fast or high-capacity memory, then buying parts that emphasize those features is inefficient and wasteful. A single 8GB module consumes roughly half the wattage of two 4GB modules, and going from two modules to one can save about $80 per year. Multiply that amount by the number of machines a business runs over each machine's life, and you are looking at thousands of dollars in energy reduction and cost savings.
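
For a rough sense of how the per-machine figure scales, here is a back-of-the-envelope Python sketch. The electricity rate, fleet size, and machine lifespan are assumptions for illustration; the $80-per-year figure is simply the one quoted above.

HOURS_PER_YEAR = 24 * 365
def annual_energy_cost(watts, rate_per_kwh=0.12):
    # yearly cost of a part drawing `watts` around the clock, at an assumed $/kWh rate
    return watts / 1000 * HOURS_PER_YEAR * rate_per_kwh
print("each watt saved is worth about ${:.2f} per machine per year".format(annual_energy_cost(1)))
saving_per_machine = 80.0   # per-year figure quoted above
fleet_size = 50             # assumed number of machines
service_life_years = 3      # assumed lifespan of each machine
total = saving_per_machine * fleet_size * service_life_years
print("estimated fleet-wide saving: ${:,.0f} over {} years".format(total, service_life_years))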

Other important components to consider are the fans and chassis. All machines heat up during use, but these two components working together allow a machine to run more smoothly. When selecting specific models, decibel levels, airflow specifications, and size are the criteria to keep in mind. Modern fans come in many sizes and designs, and their speeds can be adjusted; chassis now come in a variety of forms and layouts that can either help or hinder airflow. Qualified IT consultants can use monitoring tools such as decibel meters and wattage meters to measure the sound levels and power consumption of parts and determine which fans and chassis will be the most cost- and energy-efficient.

One easy way to help businesses choose the right components is to look at the efficiency ratings of power supplies. Modern power supplies carry an efficiency rating that directly reflects the cost of operating each unit. Platinum- or Titanium-rated power supplies, for example, achieve roughly 90-96% efficiency across most loads, whereas a low-cost unit may only achieve about 50-70%. With longevity and reliability in mind, these highly rated units yield large cost savings. It is also important to note, however, that efficiency ratings are measured at fixed benchmark loads, which do not necessarily reflect the real loads a server or desktop runs at. Being able to understand and gauge this information when selecting components is highly beneficial, because wasted wattage (in other words, inefficiency) is wasted money. By spending the extra money on a more efficient machine, you can recoup the cost difference several times over throughout the lifespan of the server (typically 3-6 years depending on usage).
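
The following Python sketch shows why the rating matters in dollar terms. The 300W average load, the $0.12/kWh electricity rate, and the two efficiency figures are illustrative assumptions, not vendor specifications. Over a 3-6 year service life, a difference of this size can exceed the price premium of the better unit several times over.

HOURS_PER_YEAR = 24 * 365
RATE_PER_KWH = 0.12  # assumed electricity cost
def annual_psu_cost(load_watts, efficiency):
    # the wall outlet supplies load / efficiency watts; the remainder is lost as heat
    wall_watts = load_watts / efficiency
    return wall_watts / 1000 * HOURS_PER_YEAR * RATE_PER_KWH
load = 300                              # assumed average load in watts
cheap = annual_psu_cost(load, 0.70)     # low-cost unit
platinum = annual_psu_cost(load, 0.92)  # Platinum/Titanium-class unit
print("low-efficiency unit:  ${:.0f} per year".format(cheap))
print("high-efficiency unit: ${:.0f} per year".format(platinum))
print("difference:           ${:.0f} per year, per machine".format(cheap - platinum))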

The right internal components play a big role in cost and energy efficiency, but peripherals such as monitors need to be considered as well. When looking for energy-efficient monitors for an office environment, factors to consider include screen size, resolution, sleep and shut-off timers, refresh rate, response time, brightness, and backlighting. On average, ENERGY STAR certified monitors are 25% more energy efficient than standard options. By spending less on energy for each monitor, a company can cut costs and invest that capital elsewhere.

Next are the power settings of these components. When aiming for a low-cost, energy-efficient environment, it can help if the company modifies its equipment's internal settings from high performance to energy efficient. In many cases, doing this requires extensive background knowledge of which settings can be modified without damaging components. Components that may benefit from these custom settings include the CPU, memory, and fans, to name a few. Users have different workloads and preferences, so a tailored configuration allows them to maximize their output. Professionals can offer support by measuring wattage draw under different loads and applying best practices to determine the most cost-effective solution for the client's needs.

A component's efficiency and configuration correlate directly with its longevity. Improper thermal control caused by incorrect component selection and configuration can trigger a cascade of negative effects and shorten a system's lifespan. When computers and servers heat up, intensive applications often slow down, stop working, or cause the system to shut down altogether. In severe cases, heat can cause thermal expansion in hard disks, resulting in data loss and rendering them unusable. In a corporate environment, data loss is a major security risk that leads to lost production, equipment, and money. Furthermore, components that generate more heat also warm up their surroundings, which in turn requires more extensive cooling solutions and even potential infrastructure changes for the company, all of which costs money. On average, cooling and ventilation systems consume about 40% of the total energy used by a data centre. These are all good reasons to choose cooler-running components where possible. A good example is the solid state drive: not only do solid state drives typically draw less power than hard disk drives, they are also faster and run much cooler.

Knowing that data will only continue to grow, it’s crucial that companies take special care when selecting components for their systems. Qualified professionals can help businesses to determine the most efficient and reliable components best suited for a corporation’s specific environment and needs. Sometimes, a component’s initial cost can be easily offset by the money it saves in the long run due to its energy efficiency rating. It’s a delicate juggling act that, if carried out properly, just might lead to major cost savings.

 

References

https://www.corsair.com/ca/en/blog/80-plus-platinum-what-does-it-mean-and-what-is-the-benefit-to-me

https://www.energystar.gov/products/office_equipment/displays

https://searchdatacenter.techtarget.com/tip/Optimizing-server-energy-efficiency


Optimizing Hardware for GIS

Maps have been created for ages, but making them traditionally required a lot of time and resources. It was not until the 1960s, with the advent of digital geographic information systems (GIS), that we became able to leverage large amounts of geographic data to create maps for various purposes in a fraction of the time. GIS has come a long way since the 60s, but it is still used to answer three primary questions:

  • What do I have?
  • Where is it?
  • How do I use it better?

GIS is a powerful tool; unfortunately, the GIS tools we use often have limitations, whether in the software, the hardware, or both. To get the best results, it is imperative to optimize your mapping systems and services so that they work efficiently and cohesively with each other.

The most popular solution currently on the market is ESRI's 32-bit ArcGIS Desktop. ArcGIS Desktop is powerful software that covers many aspects of GIS, but because it is limited to 32-bit processing and does not make use of multiple cores, it struggles to take advantage of modern hardware. One solution is to upgrade to the 64-bit ArcGIS Pro, but that program is still in its infancy and rife with bugs, and you may not be ready for the cost of a full system migration.

Luckily, ESRI has produced two features that allow Desktop users to access the resources that modern computers can provide: background processing and parallel processing. Background processing runs geoprocessing tools in a 64-bit process outside of the main ArcGIS software. Its main advantage is that it allows geoprocessing tools to access more than the 4GB of RAM that 32-bit processes are limited to. The parallel processing feature within ArcGIS provides multicore support, spreading a tool's load across a number of cores specified by the user (although it is important to ensure that this number is not greater than the number of cores available, as that can hurt performance).
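
As a small, hedged example of keeping that core count in check, the snippet below caps the Parallel Processing Factor at the number of cores the machine reports. It assumes a Python session in which ESRI's arcpy site package is importable; the requested worker count is just a placeholder.

import multiprocessing
import arcpy  # available inside an ArcGIS Python environment
available_cores = multiprocessing.cpu_count()
requested = 8  # hypothetical number of workers you would like a tool to use
# never ask for more workers than the machine can actually provide
workers = min(requested, available_cores)
arcpy.env.parallelProcessingFactor = str(workers)
print("parallelProcessingFactor set to {} of {} cores".format(workers, available_cores))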

However, not all computers are optimized to take advantage of these features. So how do we optimize our computers to run ArcGIS Desktop? The main components to worry about are the processor, memory, and storage. Additional components that can help increase productivity are a powerful graphics card, a reliable power supply, and large monitors.

Memory and Processor

RAM is one of the most important components in a system, especially when working with large datasets, as it is the fastest form of storage available to the application. To take full advantage of background processing, ESRI recommends having at least 8GB of memory, but when running through large datasets it can be advantageous to have 16GB or more in your workstation and to ensure that the system runs on a 64-bit architecture. As for the parallel processing feature, many modern processors have multiple cores (the latest Ryzen CPUs from AMD boast up to 32), so it is important to ensure that your workstations are outfitted with multi-core processors running at the highest reasonable clock speeds.
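
A quick way to audit existing workstations against this guidance is sketched below in Python. It assumes the third-party psutil package is installed for the RAM reading; the 16GB threshold is the rule of thumb suggested here rather than an ESRI requirement beyond the 8GB minimum.

import os
import platform
import psutil  # third-party package: pip install psutil
is_64bit = platform.machine().endswith("64")  # e.g. x86_64, AMD64, arm64
ram_gb = psutil.virtual_memory().total / (1024 ** 3)
threads = os.cpu_count()
print("64-bit OS:     {}".format(is_64bit))
print("installed RAM: {:.1f} GB (16 GB or more suggested for large datasets)".format(ram_gb))
print("CPU threads:   {} (multiple cores needed for parallel processing)".format(threads))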

 

Storage

When processing large datasets, or running any program that relies heavily on data retrieval, the storage medium in use can create a huge bottleneck. ArcGIS reloads all map elements each time the display is moved, so fast storage is important to minimize the time a technician must wait before continuing to work. Hard drives are too slow for most of these operations and are mostly useful for long-term archiving of data that does not need to be accessed often. In their place, a solid state drive is one of the best cost-for-performance upgrades available for older systems still running on hard drives.

There are a few other hardware components that are less critical to ArcGIS performance but are still useful for getting the most out of your computer.

Graphics Card

A powerful, dedicated graphics card can help take the strain off a CPU. While the gains are not massive, a dedicated GPU will not use the CPU resources that could otherwise be allocated to ArcGIS processing. However, it is important to balance the power of your CPU with the GPU. A powerful GPU will be slowed down by a comparatively underpowered CPU, as the CPU cannot provide enough throughput to keep the GPU fed with data. The reverse is also true with an underpowered GPU. A balance of power will prevent bottlenecks and ensure that you are getting the most out of all of your equipment.

Monitors

Multi-display workstation setups are incredibly important for productivity. When using ArcGIS, the larger the display, the better. Multiple displays allow users to view several pages, documents, and programs simultaneously, making it easier to follow templates, instructions, and tutorials without switching between windows. As well, because ArcGIS reloads all map elements each time the map display is moved or zoomed, a large 4K display lets technicians and analysts view their work with minimal screen movement, which ultimately increases productivity.

Power Supply

To save your systems from data loss and corruption, a reliable uninterruptible power supply (UPS) and power supply unit (PSU) are crucial components to have. Losing data is never a good situation to be in, and in the event of a power outage or power surge, you could lose much more than just data. A good UPS will protect the components in a system from failure while also giving you time to save, back up, and shut down properly. A stable power supply is also critical, as imperfect power delivery can hurt the longevity of components and, in very rare cases, cause a CPU to produce incorrect results.

There are many things to pay attention to when preparing a system for GIS work. Start with a sizable amount of RAM to help process large datasets, and support it with a high-clock-speed, multi-core, 64-bit CPU to ensure that data throughput is not an issue. A fast SSD for storing data locally cuts acquisition time and latency as much as possible. A dedicated graphics card means resources are not taken away from the CPU unnecessarily. Reliable power systems help uphold data integrity and prevent loss. Finally, a multi-monitor setup with large displays improves efficiency in day-to-day operation and decreases the time spent waiting for ArcGIS to load spatial data. Applied together, these measures give you the most efficient mapping services possible, as well as an unhindered geospatial team.