Have you ever thought that Microsoft, a company that sells products like the Windows OS, Visual Studio, Azure and much more for hundreds and thousands of dollars, would produce something for free? At Microsoft, as a matter of fact, even the Sticky Notes app isn't free.😝

I never imagined Microsoft would launch a product and not license it as a proprietary asset, nor had I imagined it would invest its time and money in free and open source technologies. But it has surprised the whole world in the last few years. If you look at Microsoft's milestones, you will be as surprised as I am these days. Not only has Microsoft taken an interest in open source technologies over the past couple of years, it has also launched a handful of tools and made them open source.

It may sound like I'm exaggerating, but below are some of the tools and technologies launched by Microsoft as open source. They may also leave you with a question mark: why is such a huge company so interested in FOSS, and what future surprises await open source enthusiasts and Microsoft lovers?

.NET Core

Microsoft released .NET Core as an open source product on November 12, 2014. This was a huge day for .NET, which migrated from its proprietary circle into open source. The runtime as well as the framework libraries were made open source together with .NET Core.
.NET Core is a modular development stack that is the foundation of all future .NET platforms. It’s already used by ASP.NET 5 and .NET Native.

Visual Studio Code

Visual Studio Code is a source code editor developed by Microsoft for Windows, Linux and macOS. It includes support for debugging, embedded Git control, syntax highlighting, intelligent code completion, snippets, and code refactoring. It is also customizable, so users can change the editor's theme, keyboard shortcuts, and preferences.
Microsoft released Visual Studio Code on November 18, 2015. And to highlight one surprising fact: Microsoft recently topped GitHub's list of organizations by open source contributions. This will surprise open source lovers and the Microsoft developer community alike.

PowerShell

PowerShell (including Windows PowerShell and PowerShell Core) is a task automation and configuration management framework from Microsoft, consisting of a command-line shell and an associated scripting language built on the .NET Framework.

On August 18, 2016, Microsoft announced that components of PowerShell were being open-sourced.

Black Duck Integration with Visual Studio

Black Duck is popular for its automated open source code management tools. It's in the news that Microsoft is integrating Black Duck's Hub program with Microsoft Visual Studio Team Services (VSTS), formerly Visual Studio Online, and Team Foundation Server (TFS).

I have just highlighted some of the open source tools developed by Microsoft. These findings have made me curious about Microsoft's milestones. I want to explore what it will release next in the near future, and how it will contribute to the open source community while at the same time handling its proprietary products and huge customer base. So I think I should write some more blogs on each of these open source tools and on the curiosities I have. Being an open source enthusiast, it would be awesome to compare and contrast the features available in these tools, and to get an idea of the motive behind Microsoft's investment in open source.

Stay Tuned for some more blogs.

Flight Mode (Airplane Mode) : On or Off

Traveling by plane has been a medium of fast transportation for a few decades. Crossing miles and miles within minutes has made our lives really simple. Anyone familiar with air travel knows the ritual of turning our phones to flight mode (airplane mode) and switching some of our electronic devices off.

The question may arise: why do we need to do this, especially during takeoff and landing? Why should electronic devices be turned off during a flight? Can a $900 phone really bring a $90 million jet to the ground and burn it with its own jet fuel, just because we didn't switch the phone to FLIGHT MODE?

The airplane cockpit, the front part of the plane where the pilot takes control, is filled with highly sensitive equipment required for navigation, communication and other necessities of a safe flight. Every electronic device has an electromagnetic field that can interfere with radio frequencies. It's not only the cell phone: electromagnetic radiation is emitted by every personal electronic device and can interfere with the plane's navigation and communication systems. These emissions can also interfere with the signals between the plane and the control tower during takeoff and landing.

Safety officials worry that this electromagnetic radiation might interfere with the sensitive equipment in the airplane and create safety issues. In research from a few years ago, devices like portable DVD players were found to drift an airplane's GPS readings by a few degrees, a problem resolved by turning the device off.
If you argue with a flight attendant over using your cell phone, you can be arrested and taken off the flight. If you intimidate a flight attendant, you can be jailed for decades.

Likewise, there is the issue of frequency reuse.
A single tower can't distribute frequencies everywhere, so cell phone companies establish towers from place to place and reuse the same frequencies. With cars driving along and people calling from offices and homes, there is little chance that these frequencies will overlap into another area and create problems.
It isn't hard to make a call when you are on the ground: you communicate on a single frequency with a single tower. During a flight, however, when you are moving at 500 miles per hour, your signal reaches many different towers on the same frequency at the same time, interfering with the ground-based cell phone system and increasing the possibility of interference with the plane's equipment.

In America, because of the Federal Communications Commission (FCC), one must put one's phone into airplane mode. FCC regulations ban the use of cell phones on planes in order to "protect against radio interference to cell phone networks on the ground."

These days most airlines provide on-air connectivity. Newer plane technology uses the picocell, which acts like a mini cell tower or base station on board and allows the phones on a flight and the towers on the ground to interact normally. The range of a picocell is about 200 m or less, which suits a plane in flight. Air France was the first airline to allow the use of cell phones on its planes to send texts and make and receive calls.
“It’s never been proven that a mobile phone signal has interfered with the navigation performance of the aircraft. But just because it’s never happened doesn’t mean it will never happen.”

- A pilot tells Business Insider

Above the Clouds: A Berkeley View of Cloud Computing : A SUMMARY


Conventionally, "the cloud" referred to services delivered over the Internet and the hardware behind them, but the term cloud computing is now also used for software as a service. The cloud is in demand among the tech giants because of benefits such as lower costs of electricity, network bandwidth, operations and hardware at very large economies of scale. By adopting cloud computing, one can make a lot of money, leverage existing investment, define a franchise, become a platform and attack business incumbents. Cloud services were not used earlier because reliability at scale was not yet established, but the cloud now efficiently supports new tech trends and business models as a platform. The cloud today provides new application opportunities, such as mobile interactive applications and parallel batch processing, which has been the reason for attracting a huge mass of cloud service consumers.


Cloud computing strongly follows the 'pay-as-you-go' principle, which enables elasticity and gives vendors usage-based pricing. Cloud vendors provide virtualized resources as computation (virtual machines), storage and networking models, and the cloud is more economical to rely on than other existing alternatives; cloud applications are accordingly built on three models: computation, storage and communication. As an alternative to conventional software licensing, the cloud provides pay-for-use licensing. Conventional alternatives have limited storage, whereas the cloud provides effectively unlimited storage on demand. Data lock-in was a main issue with conventional alternatives, whereas the cloud provides many standardized APIs for maintaining and manipulating our data. Earlier there were no services available to host or run our systems, but now cloud providers offer business continuity, and the cloud can even use elasticity to defend against DDoS attacks. The cloud has also improved virtual machine support, with flash memory and gang scheduling of virtual machines. Earlier, bugs in large-scale distributed systems were a major issue: a bug in a single node could damage the whole system. The cloud makes possible debuggers that rely on distributed virtual machines, so bugs can be tracked down without hampering the running system.
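The economics of pay-as-you-go elasticity can be made concrete with a toy calculation, in the spirit of the paper's examples. All prices and load numbers below are hypothetical, invented for illustration only:

```python
# Toy comparison of peak provisioning vs. pay-as-you-go elasticity.
# All numbers are hypothetical, for illustration only.

HOURS_PER_DAY = 24
PRICE_PER_SERVER_HOUR = 0.10   # assumed cloud price, $/server-hour

# A spiky workload: servers actually needed in each hour of one day.
demand = [10] * 8 + [100] * 4 + [30] * 12   # night, midday peak, evening

# Owning: you must provision for the peak, and you pay for it all day.
peak = max(demand)
owned_server_hours = peak * HOURS_PER_DAY

# Cloud: you pay only for the server-hours you actually use.
used_server_hours = sum(demand)

print(f"peak-provisioned: {owned_server_hours} server-hours")   # 2400
print(f"pay-as-you-go:    {used_server_hours} server-hours")    # 840
print(f"utilization of owned capacity: {used_server_hours / owned_server_hours:.0%}")
print(f"cloud cost: ${used_server_hours * PRICE_PER_SERVER_HOUR:.2f} per day")
```

With this made-up workload, the owned data center sits at 35% utilization while the elastic deployment pays for only the capacity it consumed.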


From the cloud provider's view, the construction of very large data centers at low-cost sites using commodity computing, storage and networking uncovers the possibility of selling those resources on a pay-as-you-go model, below the costs of many medium-sized data centers, while making a profit by statistically multiplexing among a large number of customers. It is now as unattractive for a software startup to build its own data center as it would be for a hardware startup to build its own fabrication line. Cloud computing specifically raises the following concerns:
  • Application Software of the future will likely have a piece that runs on clients and a piece that runs in the cloud. The cloud piece needs to both scale down rapidly as well as scale up, which is a new requirement for software systems.
  • Infrastructure Software of the future needs to be cognizant that it is no longer running on bare metal but on virtual machines.
  • Moreover, it needs to have a billing system built in from the beginning, as it is very difficult to retrofit an accounting system. 
  • Hardware Systems of the future need to be designed at the scale of a container (at least a dozen racks) rather than a single rack.

Note: this article is an exact copy of the summary paper I wrote on ResearchGate a year ago, as part of an assignment for my elective course Cloud Computing. I thought sharing is good.

Graphene Supercapacitor : Future of Dash Charging

Today's technology has pushed mobile devices far beyond their limits. We can all see how smartphones have made our lives comfortable and straightforward. But batteries haven't kept pace with the smartphones they power. Of course there is dash charging, but it is still time consuming and not as fully developed as it needs to be, and we are still using the traditional way of charging, with lithium-ion and lithium-polymer batteries.

The concept of the graphene supercapacitor might be the history changer for tomorrow's world of fast charging. A battery is a high-energy store that charges slowly and discharges slowly; this is high energy density. A capacitor is a low-energy store that charges rapidly and discharges rapidly; this is high power density. Combining the best of both would give fast charging with slow discharging, and supercapacitors lie between these two energy storage methods.

Eesha Khare, a graduate of Lynbrook High School, California, developed a supercapacitor prototype that charges very rapidly and lasts for many more charging cycles. She was a runner-up at the Intel International Science and Engineering Fair (2013) with this prototype. It is still a prototype in its current state, but it has been proven to work.

Her prototype could be fully charged within 20-30 seconds and holds its charge about as long as other similar devices. It lasts for 10,000 cycles, compared with the roughly 1,000 cycles of today's batteries: a much greater achievement in the field of energy storage.

Nanoscale and mass-scale fabrication isn't yet practical in today's world. Gradually scaling up this prototype could lead to powering phones and electric cars, and as the technology advances, the mAh capacity of batteries will certainly increase in the future as well.

Imagine plugging your phone in, charging it for about 20 seconds, and getting a day or more of use out of that charge.



The human brain is a box full of mystery. You never know what new idea may explode in your course of time. And what if your idea turned out to be so intelligent that it could act and react like a human? I am quite sure that, reading these lines, one thing struck your mind: "robot". A robot is a machine built to carry out a complex task or group of tasks for which it is programmed. It is one example of the hard work and dedication of computer scientists over the years, and the concept of AI has brought their dream closer to reality. Artificial intelligence has arguably been the most exciting field in robotics, and certainly the most controversial: everyone agrees a robot can work on an assembly line, but there is no consensus on whether a robot can ever be as intelligent as a human brain. Indeed, building a man-made machine with intellectual abilities like ours is the challenge for the AI world.

Today I am going to share a creepy robot that's fascinating today's world: "COZMO". Cozmo is an artificially intelligent toy robot that shows the scope and future of AI and robotics. It was unveiled by the San Francisco startup Anki. Cozmo is software driven and connects to your device through Bluetooth. There's no data in the cloud, so Cozmo is secure. It comes with three interactive Power Cubes, which the robot can recognize and with which you can play games such as Quick Tap and Keep Away, or just free play. Cozmo is born to be playful; it is charming, mischievous and unpredictable. It recognizes and remembers you, interacts with you, plays games and gets to know you over time. It combines the best of animation (humor, personality and emotional connection) with robotics.

Powered by advanced robotics, AI and computer vision, it has a brain that processes millions of data points per second. It uses technologies such as computer vision, animatronics, motors and AI software. It has three ARM-based microprocessors, and its companion app runs on Android and iOS.

Cozmo's eyes change color from blue to green when its mood changes. It can yawn, sneeze violently or shiver. The final design is made from 340 parts and has four motors, a camera and an OLED display. When Cozmo is put on its charger, it goes to sleep and snores.

In this challenging world of AI, this robot, which can wake up, blink its eyes and crawl, has given life to the concept John McCarthy envisioned.

Reference: VentureBeat

Artificial Intelligence in Medicine : Enhancing World of Medicine

Our fathers and forefathers always wanted us to become a doctor, an engineer, a scientist or a professor. But how can we, the children of today's generation, live within that boundary? Obviously we want more than this, don't we? How can we be satisfied being a scientist or a doctor specialized in only one particular field, when we want to innovate more for today's world? And for huge innovations, wouldn't collaboration be the best idea? Yes, collaboration has produced sensational examples in today's world, and in this blog I am going to share one tremendous example.

AIM (Artificial Intelligence in Medicine) is an example of such a collaboration between computer scientists and health care professionals. Medical artificial intelligence is primarily concerned with the construction of AI programs that perform diagnosis and make therapy recommendations. From the very earliest moments of modern history, scientists and doctors alike were captivated by the potential such technology might have in medicine. With intelligent computers able to store and process vast stores of knowledge, the hope was that they would become the perfect "doctor in a box", assisting or surpassing clinicians with various challenges in medical science.

An AIM system is expected to cover as many branches of medical treatment as it can, starting with generating alerts and reminders: an expert system attached to a monitor can warn of changes in a patient's condition, and in less acute circumstances it might scan laboratory test results or drug orders and send reminders or warnings through an e-mail system.

Diagnostic assistance is the next and most important expectation from AIM. When a patient's case is complex or rare, or the person making the diagnosis is simply inexperienced, an expert system can help come up with likely diagnoses based on patient data. In the first decade of AIM, most research systems were developed to assist clinicians in the process of diagnosis, typically with the intention that they would be used during a clinical encounter with a patient. It is now clear, though, that some of the psychological basis for developing this type of support is considered less compelling, given that situation assessment seems to be a bigger issue than diagnostic formulation.

DXplain is one example of such a clinical decision support system, developed at Massachusetts General Hospital. It is used to assist in the process of diagnosis: it takes a set of clinical findings, including signs, symptoms and laboratory data, and produces a ranked list of diagnoses. It provides a justification for each differential diagnosis and suggests further investigations. The system contains a database of crude probabilities for over 4,500 clinical manifestations associated with over 2,000 different diseases.
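The ranking idea can be sketched in a few lines of code. This is a toy illustration of scoring diseases by crude probabilities of findings, not DXplain's actual algorithm; the diseases and numbers below are invented:

```python
# Toy diagnostic ranking: NOT DXplain's algorithm, just the general idea.
# The P(finding | disease) values below are invented for illustration.
KNOWLEDGE = {
    "flu":         {"fever": 0.9, "cough": 0.8, "rash": 0.05},
    "measles":     {"fever": 0.8, "cough": 0.5, "rash": 0.9},
    "common cold": {"fever": 0.3, "cough": 0.9, "rash": 0.01},
}

def rank_diagnoses(findings):
    """Rank diseases by the product of crude probabilities of the findings."""
    scores = {}
    for disease, probs in KNOWLEDGE.items():
        score = 1.0
        for finding in findings:
            score *= probs.get(finding, 0.01)   # small default for unknowns
        scores[disease] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for disease, score in rank_diagnoses(["fever", "rash"]):
    print(f"{disease}: {score:.3f}")
```

With the findings "fever" and "rash", the toy knowledge base ranks measles first, which is the shape of output a system like DXplain produces (alongside justifications and suggested investigations, which this sketch omits).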

Artificial Intelligence in Medicine (AIM) is evolving into a field in which interactions with the outside world are not only natural but mandatory. Although the basic research topics in AIM may be those of artificial intelligence, the applied issues touch more generally on the broad field of medical informatics.

TORRENT: Leech, Seed, Peer, Everything To Know About

In simple words: you are a leecher/peer until your download completes. You become a seeder once you have 100% of the download.

Most of us know well what a torrent is and where to use one. But what exactly is a torrent? How does BitTorrent work? Why torrents? Are torrents legal? How are large files hosted on the Internet so easily?

The American computer programmer Bram Cohen came up with the concept of sharing a file over the Internet between different hosts. He designed a peer-to-peer communication protocol for distributing data over the Internet, termed the BitTorrent protocol, and first released it in 2001; it took two years before the new file-sharing protocol gained a notable audience, and in the years that followed millions of torrent files were downloaded and shared billions of times.

Peer-to-peer file sharing is different from traditional file downloading. In peer-to-peer sharing, you use a software program (rather than your web browser) to locate computers that have the file you want. Because these are ordinary computers like yours, as opposed to servers, they are called peers. The BitTorrent protocol divides a large file into small chunks, allowing users to download sections of it and to exchange those sections among themselves until the download completes. The protocol uses less bandwidth from the file's creator, which is a great advantage for its distribution in the long term.

How does BitTorrent work?

Unlike some other peer-to-peer downloading methods, BitTorrent is a protocol that offloads some of the file-tracking work to a central server (called a tracker). Another difference is that it uses a principle called tit-for-tat: in order to receive files, you have to give them, so the more files you share with others, the faster your downloads are. Finally, to make better use of available Internet bandwidth (the pipeline for data transmission), BitTorrent downloads different pieces of the file you want simultaneously from multiple computers.
It's a fairly simple concept. When you download a torrent, you aren't downloading the file from one specific person, but rather from many different sources who share it. For example, say I am downloading a 300-megabyte movie at 200 kB/s from 10 different sources; the average transfer rate per source is then 20 kB/s, so from each person I am downloading the same file (but different pieces of it) at an average of 20 kB/s.

The thing is that some people have slow Internet, so you may download faster from others. Say you have 10 sources but 5 of them are on dial-up, and you receive a maximum of 5 kB/s from each of them; the other 175 kB/s would then come from the 5 users who are on broadband or better.

It's the same concept for seeding. Once you download the file and allow it to seed, many people connect to your computer, since you are now hosting the file, not just one person. Say you are seeding a file to 30 people but your upload speed is 150 kB/s; each of them then receives an average of 5 kB/s from you.
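The arithmetic in these examples is just averaging; a few lines of Python (using the same made-up numbers as above) make it explicit:

```python
# Per-peer transfer rates from the examples above (hypothetical numbers).

def average_rate_per_peer(total_rate_kbs, peers):
    """Average rate contributed by each peer, in kB/s."""
    return total_rate_kbs / peers

# Downloading at 200 kB/s from 10 peers -> 20 kB/s per peer on average.
print(average_rate_per_peer(200, 10))          # 20.0

# 5 dial-up peers capped at 5 kB/s each leave the rest to the fast peers.
dialup_total = 5 * 5                           # 25 kB/s from dial-up peers
broadband_total = 200 - dialup_total           # 175 kB/s from the other 5
print(broadband_total / 5)                     # 35.0 kB/s per fast peer

# Seeding to 30 people with 150 kB/s upload -> 5 kB/s each on average.
print(average_rate_per_peer(150, 30))          # 5.0
```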

Your friend's Internet connection impacts the speed of the download, and he may also be unlucky and downloading from sources that have slow Internet speeds (it happens).

Is torrent legal?

If an item is copyrighted and you don't own it, and someone else is the owner, then downloading it for free is not legal, via torrent or otherwise. The protocol itself, however, is perfectly legal. Torrents may be primarily associated with piracy at present, mainly because of their decentralized nature, but there are many legal uses of BitTorrent; many Linux distros, for example, prefer torrents for pushing out releases, as it reduces the stress on their servers.

What is seeding?

Seeding is when you leave your BitTorrent client open after you've finished your download to help distribute the file (you distribute it while downloading, but it's even more helpful if you continue to distribute the full file after you have finished). Chances are that most of the data you got came from seeds, so help give back to the community! It doesn't require much: the client will continue seeding until the torrent is removed (right-click the torrent, then hit Remove). Proper practice is to seed until your upload:download ratio is at least 1.00.
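The ratio rule is simple enough to express as a tiny helper (the function name and numbers are mine, not from any client's API):

```python
def share_ratio(uploaded_bytes, downloaded_bytes):
    """Upload:download ratio; proper practice is to seed until it reaches 1.0."""
    if downloaded_bytes == 0:
        return float("inf")   # original seeder: nothing was downloaded
    return uploaded_bytes / downloaded_bytes

# A 300 MB download with only 120 MB uploaded so far: keep seeding.
ratio = share_ratio(120 * 2**20, 300 * 2**20)
print(f"ratio {ratio:.2f}, keep seeding: {ratio < 1.0}")
```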

What are seeds, peers and leeches in Torrents' language?

SEEDERS are those who have already downloaded the file; initially, only one person, the uploader of the torrent, seeds it to others. You may notice that after your download is complete the torrent turns from DOWNLOADING to SEEDING. A seeder is someone from whom you can download pieces of the file, so seeders affect the overall availability of a file on the P2P (peer-to-peer) network.

PEERS are those who are downloading and uploading at the same time. They do not possess the whole file, only parts of it. A peer is anyone involved in the file-sharing activity; it is a generic term.

LEECHERS are those who don't have all the parts of the file and so cannot share the part you require. If there are zero seeders, it is doubtful that you will ever finish downloading that torrent; only very rarely can you complete a download from leechers alone. A leecher is also someone who has downloaded a file but is not sharing it back to the P2P (peer-to-peer) network, decreasing the overall availability of the file.

In simple words: you are a leecher/peer until your download completes. You become a seeder once you have 100% of the download.

What is inside a BitTorrent file?

It contains the address of one or more trackers and information about the files. The tracker is a server that knows which users have the real file.
The basic principle is:
  • Your BitTorrent program opens the BitTorrent file, connects to the tracker(s) and gets a list of people who have the file. 
  • Your BitTorrent program connects to those people and requests pieces of the file. 
  • You are now also on the list, so any user opening the BitTorrent file after you will get your address as well and can download from you the pieces you have already downloaded. 
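That metadata (tracker addresses plus file info) is serialized in BitTorrent's "bencoding" format. As a sketch, here is a minimal bencoder and a toy metainfo dictionary; the field names (`announce`, `info`, `piece length`, `pieces`) are the real ones from the BitTorrent specification, but all the values are made up:

```python
def bencode(value):
    """Minimal bencoder: ints, byte strings, lists, dicts (BitTorrent's format)."""
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, str):
        value = value.encode()
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):                 # keys must be sorted byte strings
        items = sorted((k.encode() if isinstance(k, str) else k, v)
                       for k, v in value.items())
        return b"d" + b"".join(bencode(k) + bencode(v) for k, v in items) + b"e"
    raise TypeError(type(value))

# A toy single-file metainfo dictionary (all values are placeholders).
meta = {
    "announce": "http://tracker.example.com/announce",
    "info": {
        "name": "example.iso",
        "length": 4096,            # total file size in bytes
        "piece length": 1024,      # size of each chunk peers exchange
        "pieces": b"\x00" * 80,    # concatenated SHA-1 hashes, 20 bytes/piece
    },
}
print(bencode(42))        # b'i42e'
print(bencode("spam"))    # b'4:spam'
```

A real .torrent file is exactly `bencode(meta)` written to disk, with `pieces` holding the actual SHA-1 hash of each chunk so downloaders can verify every piece they receive.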

How does the first seeding of a torrent start? 

  • You create a torrent using any torrent client 
  • add trackers (to manage a list of all the swarms and peers) 
  • distribute the .torrent file 
  • users (torrent clients) read the .torrent file and obtain a list of peers and seeders from the trackers by querying the torrent's unique hash 
  • before connections are made to those peers, information such as the total pieces, piece size, names and hierarchy of files is read from the .torrent file 
  • connection setup and downloading start 

What is the first seeded torrent file?

The Oldest Torrent

The torrent file that has been around the longest, to our knowledge, is The Matrix ASCII. We already crowned it the oldest torrent back in 2005, and as of today (Nov 7, 2010) it is still active, with a few downloaders and only one seeder.
The torrent in question was created in December 2003, when sites like isoHunt and The Pirate Bay were only a few months old and Facebook and YouTube didn't yet exist. Thus far, this torrent has survived a mind-boggling 2,500 days.

What is the largest torrent file?

When we refer to the largest torrent, we mean the single .torrent file that downloads the most data, not the size of the .torrent file itself. There are several huge torrents active at the moment, but the record goes to a 746.70 GB collection of all the 2010 World Cup soccer matches (~6 GB per half).


Assistive Domotics: Logical Design of Automated Door in a Smart Home


Assistive technology is the form of home automation that includes assistive, adaptive and rehabilitative devices for people with disabilities, along with the processes used in selecting, locating and using them. Microsoft Corporation's CEO Steve Ballmer once said, "The number one benefit of technology is that it empowers people to do what they want to do. It lets people be creative. It lets people be productive." The US Census Bureau has projected that by 2010, 13% of the population will be 65 or older (Cheek 2005), and that by 2030 there will be 9 million Americans older than 85. Providing physical health and security personnel for all of them will be a tough task because the cost will be massive, so automated machines will be the best cost-effective alternative.

History of Assistive Domotics

Today every embedded device is automated to some extent in order to ease its users. Nikola Tesla's remote-controlled boat (1898) is the first known automated device of this kind in history. As for home automation, fictional model homes were exhibited at the World's Fairs of the early 1930s to excite the spectators. The invention of the Complex Number Calculator (CNC) in 1940, the mouse in 1964, the Mac OS in 1984 and the first wireless systems in 1989 were further developments in automation. In 1984, home automation technology spread to garage doors, security systems, infrared control, fiber optics and much more. Eventually a separate field named assistive domotics emerged, emphasizing the development of automated appliances for elderly and disabled people.

Machines for Assistive Automation

A Home Robot

A home robot is a mobile device for moving about and performing tasks such as vacuuming, measuring, communicating and fetching objects. It is useful for elderly people who have problems with ageing and back pain; many of their daily activities can be handled by a home robot.

Assistive Bed

An assistive bed is an externally monitored machine specially designed for elderly and disabled people with spinal cord disabilities or paralysis. When the user rests in the bed, its string-like structure expands and contracts, measuring the weight resting on it.

Designs in Smart Home

The development of home automation is in its early stages, and much research in the field is ongoing. There are obstacles to building a completely automated system: no existing system in the world is completely automated by itself, mainly because machines need external factors, like humans and electricity, to begin functioning.

Design of Automated Door

A security code must be entered in order to open the door and enter the house. This scenario is represented below with the help of a context diagram and a transition diagram.

State Definition
Here the automated door is deterministic in nature, and the system consists of three states:
Q = {MainDoor, PasswordCheck, MainHall},
two input symbols, authorized as '1' and unauthorized as '0', i.e. ∑ = {0, 1},
a starting state q0 = MainDoor,
and a final state F = {MainHall}.
A transition table illustrating the states involved while entering the main hall is represented below.

Context Diagram
Below is the context diagram, which clearly shows the working mechanism of the automated door:

Transition Table
The transition table of the automated door, as per the context diagram, is shown below:

Transition Diagram
Transition diagram of the automated door is shown below:
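Since the table and diagrams are figures in the original paper, the same machine can also be sketched in code. The transition function below is my own reading of the state definition (an authorized '1' at PasswordCheck opens the way to MainHall, an unauthorized '0' sends you back to MainDoor), not a copy of the published table:

```python
# DFA for the automated door: Q = {MainDoor, PasswordCheck, MainHall},
# alphabet {0, 1} ('1' = authorized, '0' = unauthorized), start at MainDoor,
# accepting state MainHall. Transitions are an assumed reading of the design.
DELTA = {
    ("MainDoor", "1"): "PasswordCheck",   # approach the door, attempt a code
    ("MainDoor", "0"): "MainDoor",
    ("PasswordCheck", "1"): "MainHall",   # authorized: the door opens
    ("PasswordCheck", "0"): "MainDoor",   # unauthorized: back outside
    ("MainHall", "1"): "MainHall",        # already inside; inputs are ignored
    ("MainHall", "0"): "MainHall",
}

def run(inputs, state="MainDoor"):
    """Feed a string of '0'/'1' symbols to the door DFA; return the final state."""
    for symbol in inputs:
        state = DELTA[(state, symbol)]
    return state

print(run("11"))    # MainHall: two authorized inputs get you inside
print(run("10"))    # MainDoor: a wrong password sends you back outside
```

Accepting a string then simply means `run(string) == "MainHall"`, matching the final state F in the definition above.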

This research was conducted as partial fulfillment of the course "Automata and Formal Languages". The topic was chosen for the potential of automation in houses and business complexes. With this logical design, we can simulate the result using functional or logic programming languages like LISP and Prolog for convenience. I've also designed another logical implementation of the movement of a wheelchair within a smart house, which can be seen in the reference paper.


Reference: Sanjog Sigdel, "Assistive Domotics: Logical Design of Automated Home and Movement of Wheelchair in a Smart Home", published on ResearchGate [accessed Jan 29, 2017].


G. Marconi, an Italian inventor, unlocked the path of modern wireless communications by transmitting the letter 'S' over a distance of 3 km as three-dot Morse code using electromagnetic waves. Since that inception, wireless communication has become an important part of present-day society. From satellite communication, television and radio transmission it has advanced to pervasive mobile telephony, transforming the way society runs. The evolution of wireless is shown in Fig. 1, which charts the evolving generations of wireless technology in terms of data rate, mobility, coverage and spectral efficiency; as the technologies grow, all four increase. It also shows that the 1G and 2G technologies use circuit switching, 2.5G and 3G use both circuit and packet switching, and the later generations from 3.5G up to today's 5G use packet switching. Along with these factors, it differentiates licensed from unlicensed spectrum: all the cellular generations use licensed spectrum, while Wi-Fi, Bluetooth and WiMAX use unlicensed spectrum. An overview of the evolving wireless technologies follows:

From 1G to 2G and from 3G to 4G, the world of telecommunication has seen a number of improvements, with better performance every passing day. This fast revolution in mobile computing changes our day-to-day life: the way we work, interact, learn and implement. Until now we have achieved 4G technology, and the smartphones manufactured nowadays are a featured example of this generation. Underdeveloped countries such as Nepal have not yet been able to use this generation, unlike other countries where research and innovation toward the next generation is already under way. All we can say is that 4G is a significant stepping stone toward the development of 5G technology.

Now, let's dig into what 5G is all about. 5G aims to achieve the vision of essentially unlimited access to information and sharing of data anywhere and anytime, for anyone and anything, by providing affordable broadband wireless connectivity at very high speed (well above the roughly 1 Gbps that 4G offers). The EDGE technology used with 3G devices and the Wi-Fi and LTE of 4G would ultimately be developed into the WWWW (World Wide Wireless Web), providing global sharing on a large scale through a common connected network of devices, also called ubiquitous computing: the user can simultaneously be connected to several wireless access technologies (2G, 3G, 4G or 5G mobile networks, Wi-Fi, PAN) and seamlessly move between them, resulting in multiple concurrent data transfers. It's not just about super-fast connections and capacity; it's about not compromising the needs of thousands of people. Suppose you are in a stadium along with 30,000 other people who are all using mobile data to upload a photo to social media: for 4G that is a great deal, but 5G uses innovative technology that can allocate each user their own antenna. That makes 5G almost 10 times faster than 4G.

Additionally, 5G focuses on VoIP (Voice over IP)-enabled devices, so users will experience higher call volumes and data transmission. It brings the concept of the intelligent Internet phone, where the mobile device can choose the best available connection. It will support IPv6, with a visiting care-of mobile IP address assigned according to the location and the connected network. 5G will also provide wireless connectivity for a wide range of new applications and use cases, including wearable devices, smart homes, traffic safety and control, critical infrastructure and industry applications, as well as very-high-speed media delivery, documentation, electronic transactions (e-payments) and multimedia newspapers, even watching TV programs with the clarity of an HD TV. It is expected to keep the initial Internet philosophy of a network that is as simple as possible, with more functionality in the end nodes, combining 5G with cloud computing and nanotechnology into a real wireless world with no more limitations of access and zones.

Looking at the evolution beyond the mobile internet, from analog through to LTE (Long Term Evolution), each generation of mobile technology has been motivated by a requirement identified as unmet by its predecessor. The next generation must advance all existing mobile technology, becoming so smart that it can ultimately support artificial intelligence. By 2020, 3D movies and games, real-time streaming of ultra-HD content and remote medical services will be common on smartphones, creating the need for 5G networks.

Reference: A Survey of 5G Network Architecture and Emerging Technologies