National Conference on ICT Integrated Pedagogy for Effective and Meaningful Learning Held in Nepal



The National Conference on ICT Integrated Pedagogy for Effective and Meaningful Learning was held today at Lumbini Banquet, New Baneshwor, Kathmandu. The event was organized by the UNESCO Resource Distribution and Training Centre (RTDC) and Kathmandu University School of Education (KUSOED), in collaboration with the Ministry of Education (MoE) and the Open and Distance Education Centre (ODEC), Tribhuvan University.

The conference started with welcome remarks from Prof. Dr. Mahesh Nath Parajuli, Dean, KUSOED. Dr. Hari Lamsal, Joint Secretary, Ministry of Education (MoE), gave his keynote speech on ICT in Education: Policy, Implementation and Challenges. Likewise, Dr. Bal Chandra Luitel, Associate Dean, KUSOED, gave a keynote speech on ICT Integrated Pedagogy: Principles and Practices. Mr. Sagun Dhungana, Technology Integration Coach and Google Certified Educator, gave a presentation on Educational Apps in the Pedagogical Workflow.

The one-day conference continued with parallel sessions, including oral presentations and workshops on various ICT-related topics. Many researchers presented case studies and proposals on ideas for improving ICT standards.

Eight valuable publications on ICT Policies and Standards were presented today at the National Conference on ICT Integrated Pedagogy for Effective and Meaningful Learning.

Additional Resources on the research papers:

Please email us and let us know if you are hosting any IT-related event. We would love to cover your event on our blog.



Is Artificial Intelligence Free?

I saw someone ask the question "How can AI be free?" on Quora. Having a keen interest in this field, I tried to think through all the possible angles. The question was not precise about whether they wanted to know if the tools we use for an AI project are free, or whether they were asking about languages, platforms, and frameworks. But the answer was quite straightforward: AI is not patented by any tech giant, nor is it private hardware that only one individual can access. Artificial Intelligence is a subject and field of study, and knowledge is always free; all it requires is your mind to capture as much as you can. So I answered with my own reasoning, or simply what I think about it. Together with that question, the question in my blog title, "Is AI Free?", also seems valid. Here is what I answered:

 
Artificial Intelligence, this exciting research field in Computer Science, is itself in its initial phase, and hundreds of tech giants, institutions, universities, and researchers are doing independent research. Within the last 8–10 years, such research bodies have published tools, languages, datasets, frameworks, and libraries, as well as community support platforms. Knowledge about AI, and the skills needed to put it into practice, matter before any concern about whether or not AI is free.

Just assume you have knowledge of AI and enough skill to program your robot or IoT device however you want; there won't be any hindrance to implementing your algorithms for an AI project.

  • There are huge datasets available, from ImageNet onward, to process, and there are abundant free online courses on Coursera.
  • There are libraries in Python like SciPy, NumPy, and PyBrain for the statistical analysis of data in your Machine Learning projects.
  • There are now foundations like OpenAI and AI4ALL which aim to make this huge knowledge base of Artificial Intelligence available to the open community.
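As a quick illustration of how far the free tooling goes, here is a minimal sketch using NumPy (the sample numbers are made up):

```python
# Free, open-source tools like NumPy already cover basic statistics.
import numpy as np

data = np.array([2.1, 2.5, 3.0, 2.8, 3.2])   # made-up sample values
print(round(float(np.mean(data)), 2))        # 2.72, the sample mean
print(round(float(np.std(data, ddof=1)), 2)) # sample standard deviation
```

Everything used here installs for free with a single `pip install numpy`.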

And lists of places where you can get AI resources for free are abundantly available online. All you need to do is explore :)

Above the Clouds: A Berkeley View of Cloud Computing : A SUMMARY


DISCUSSION

Conventionally, the cloud referred to services delivered over the Internet and the hardware behind them, but the term cloud computing now refers to Software as a Service. The cloud is in demand among tech giants due to benefits such as decreased costs of electricity, network bandwidth, operations, and hardware availability at very large economies of scale. By adopting cloud computing, a company can make money, leverage existing investment, define a franchise, become a platform, and attack business incubation. Cloud services were not widely used earlier because reliability at scale had not been demonstrated, but the cloud now efficiently supports new technology trends and business models on the cloud platform. The cloud today enables new application opportunities, such as mobile interactive applications and parallel batch processing, which have attracted a huge mass of cloud service consumers.

FINDINGS

Cloud computing strongly follows the 'pay-as-you-go' principle, which enables elasticity and gives its vendors usage-based pricing. Cloud vendors provide virtualized resources as a computation model (virtual machines), a storage model, and a networking model; cloud computing applications are built on these three models of computation, storage, and communication. The cloud is more economical to rely on than other existing alternatives. As an alternative to conventional software licensing, the cloud provides pay-for-use licensing. Conventional alternatives have limited storage, whereas the cloud provides effectively unbounded storage. Data lock-in was a major issue with conventional alternatives, whereas the cloud provides many standardized APIs for maintaining and manipulating our data. Earlier there was no ready way to host or run our systems with guaranteed availability, but cloud providers now offer business continuity, and the cloud even uses elasticity to defend against DDoS attacks. The cloud has also improved virtual machine support, such as flash memory and gang-scheduling of virtual machines. Bugs in large-scale distributed systems were once a major issue, since a bug in a single node could damage the whole system, but the cloud supports debuggers that rely on distributed virtual machines and can track down bugs without hampering the running system.
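The elasticity benefit can be made concrete with a back-of-the-envelope calculation. All prices and demand figures below are hypothetical, in the spirit of the paper's own cost comparisons:

```python
# Hypothetical cost comparison: owning servers sized for peak demand
# vs renting by the hour in the cloud (all numbers are made up).
peak_servers = 100            # capacity needed only at the daily peak
avg_servers = 30              # average actual demand
hours = 24 * 365              # one year of operation

owned_cost = peak_servers * hours * 0.04   # assumed amortized $/server-hour
cloud_cost = avg_servers * hours * 0.10    # assumed cloud $/server-hour

print(round(owned_cost))      # cost of provisioning for the peak
print(round(cloud_cost))      # usage-based cost tracks average demand
print(cloud_cost < owned_cost)
```

With these assumed numbers, paying a higher hourly rate still wins because you only pay for average, not peak, demand.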

ARGUMENTS

From the cloud provider's view, the construction of very large data centers at low-cost sites using commodity computing, storage, and networking uncovers the possibility of selling those resources on a pay-as-you-go model below the costs of many medium-sized data centers, while making a profit by statistically multiplexing among a large number of customers; it is as unattractive for a software startup to build its own data centre as it would be for a hardware startup to build its own fabrication line. Cloud computing specifically raises the following concerns:
  • Application Software of the future will likely have a piece that runs on clients and a piece that runs in the cloud. The cloud piece needs to both scale down rapidly as well as scale up, which is a new requirement for software systems.
  • Infrastructure Software of the future needs to be cognizant that it is no longer running on bare metal but on virtual machines.
  • Moreover, it needs to have billing system built in from the beginning, as it is very difficult to retrofit an accounting system. 
  • Hardware Systems of the future need to be designed at the scale of a container (at least a dozen racks) rather than a single rack.

Reference:
Note: This article is an exact copy of the summary paper I wrote on ResearchGate a year ago as part of an assignment for my elective course, Cloud Computing. I thought sharing is good.

Graphene Supercapacitor: The Future of Dash Charging


Today's technology has pushed mobile devices far beyond their limits. We can see how smartphones have made our lives comfortable and straightforward. But battery development hasn't kept pace with smartphone development. Of course there is dash charging, but it is still time-consuming and not as fully developed as it needs to be. And we are still using the traditional way of charging, with lithium-ion and lithium-polymer batteries.


The concept of the graphene supercapacitor might be the history changer for tomorrow's world of fast charging. A battery is a high-energy store that charges slowly and discharges slowly; this property relates to energy density. A capacitor is a low-energy store that charges rapidly and discharges rapidly; this relates to power density. Combining the best of both leads to fast charging with slow discharging. Supercapacitors lie between these two energy-storage methods.
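The trade-off reduces to a simple relation: charge time equals stored energy divided by charging power. The figures below are illustrative assumptions, not measurements of any real device:

```python
# Charge time = energy capacity / charging power (idealized, no losses).
def charge_time_s(energy_wh, power_w):
    return energy_wh * 3600 / power_w

battery_wh = 10        # assumed phone-battery capacity
battery_power_w = 18   # assumed "fast" charger power
print(round(charge_time_s(battery_wh, battery_power_w)))  # 2000 s, ~33 min

# A supercapacitor stores less energy but accepts far more power:
cap_wh = 1
cap_power_w = 150
print(round(charge_time_s(cap_wh, cap_power_w)))          # 24 s
```

The point of the sketch: raising the power a store can accept shrinks charge time far more than raising its capacity.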

Eesha Khare, a graduate of Lynbrook High School, California, developed a supercapacitor prototype that charges very rapidly and lasts for more charging cycles. She was a runner-up at the Intel International Science and Engineering Fair (2013) for this prototype. It is still a prototype in its current state, but it has been proven to work.

Her prototype could be fully charged within 20–30 seconds and holds its charge longer than other similar devices. It lasts for 10,000 cycles, compared to about 1,000 cycles for current batteries: a much greater achievement in the field of energy storage.

Nanoscale and mass-scale fabrication isn't yet practical in today's world. Gradually scaling up this prototype could lead to powering phones and electric cars. As the technology improves, battery capacities (mAh) will certainly also increase in the future.

Imagine plugging your phone in, charging it for about 20 seconds, and having that charge last a day or more.


Reference:
Eesha_Khare





COZMO: HUMAN ENHANCEMENT TOWARDS AI ROBOTICS



The human brain is a box full of mysteries. You never know what new idea may emerge in the course of time. And what if that idea turned out to be so intelligent that it could often act and react like a human? I am quite sure that reading these lines, one thing struck your mind: "robot". A robot is a machine built to carry out some complex task or group of tasks for which it has been programmed. It is one example of the hard work and dedication of computer scientists over the years, and the concept of AI has brought their dream closer to reality. Artificial intelligence has arguably been the most exciting field for robotics, though it is certainly the most controversial: everyone agrees a robot can work on an assembly line, but there is no consensus on whether a robot can ever be as intelligent as a human brain. Indeed, it is a challenge for the AI world to build a man-made machine with intellectual abilities like ours.

Today I am going to share a charming little robot that's fascinating today's world: COZMO. Cozmo is an artificially intelligent toy robot that shows the scope and future of AI and robotics. It was unveiled by the San Francisco startup Anki, co-founded by Hanns Tappeiner. Cozmo is software-driven and connects to your device through Bluetooth; there's no data in the cloud, so Cozmo is secure. It comes with three interactive power cubes, which the robot can recognize, and with them you can play games such as Quick Tap and Keep Away, or just free-play. Cozmo is born to be playful; it is charming, mischievous, and unpredictable. It recognizes and remembers you, interacts with you, plays games, and gets to know you over time. It combines the best of animation (humor, personality, and emotional connection) with robotics.

Powered by advanced robotics, AI, and computer vision, it has a brain that processes millions of data points per second. It uses technologies such as computer vision, animatronics, motors, and AI software. It has three ARM-based microprocessors, and its companion app runs on Android and iOS.

Cozmo's eyes change color from blue to green when its mood changes. It can yawn, sneeze violently, or shiver. The final design is made from 340 parts and has four motors, a camera, and an OLED display. When Cozmo is put on a charger, it goes to sleep and snores.

In this challenging world of AI, this robot, which can wake up, blink its eyes, and crawl, has given life to the concept of John McCarthy.

Reference: venturebeat

Quantum Computing: Next Generation's Biggest Invention

The theoretical study of computation systems that make direct use of quantum-mechanical phenomena, such as superposition and entanglement, to perform operations on data is called quantum computing. Quantum computers are different from binary digital electronic computers: a binary digital electronic computer uses the binary digits 0 and 1, whereas quantum computation uses quantum bits, which can also be in a superposition of both 0 and 1. The field of quantum computing was initiated by the work of Paul Benioff and Yuri Manin in 1980. Over time, many others took the initiative to develop it.

What is a qubit?

A qubit, just like a classical bit, has two possible states, |0⟩ and |1⟩ (denoted in Dirac notation). But unlike a classical bit, a qubit can be in more than two possible states: it can be in a superposition of the states |0⟩ and |1⟩. Mathematically, a superposition state looks like

|ψ⟩ = α|0⟩ + β|1⟩

In the above expression, α is the probability amplitude for state |0⟩ and β is the probability amplitude for state |1⟩; α and β can both be complex numbers. Since α and β are probability amplitudes, they must be normalized. Mathematically,

|α|² + |β|² = 1

Qubits can be represented in terms of the polarization of a photon, where vertical and horizontal polarization are the two states. They can also be represented in terms of the spin of an electron. There are a number of quantum computing models, distinguished by the basic elements into which the computation is decomposed.
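The superposition and measurement rules above can be sketched numerically. This is a toy model of a single qubit, not a full quantum simulator:

```python
import numpy as np

# A qubit state is a normalized 2-vector of complex amplitudes (alpha, beta).
rng = np.random.default_rng(0)

alpha = beta = 1 / np.sqrt(2)                # equal superposition of |0> and |1>
state = np.array([alpha, beta], dtype=complex)

# Normalization check: |alpha|^2 + |beta|^2 must equal 1.
assert np.isclose(np.sum(np.abs(state) ** 2), 1.0)

def measure(state, shots=10000):
    # Measuring collapses the qubit to 0 or 1 with probabilities
    # |alpha|^2 and |beta|^2.
    probs = np.abs(state) ** 2
    return rng.choice([0, 1], size=shots, p=probs)

outcomes = measure(state)
print(np.mean(outcomes))  # close to 0.5 for the equal superposition
```

Repeating the measurement many times recovers the probabilities, which is exactly what the normalization condition guarantees.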

Quantum Computing Models

Adiabatic quantum computer based on quantum annealing
Here, the computation is decomposed into a slow continuous transformation of an initial Hamiltonian (an operator corresponding to the total energy of the system in quantum mechanics) into a final Hamiltonian whose ground state contains the solution.

One way quantum computer
The computation is decomposed into a sequence of single-qubit measurements applied to a highly entangled initial state, or cluster state.

Topological quantum computers
The computation is decomposed into the braiding of anyons in a 2D lattice.

Quantum gate array
The computation is decomposed into a sequence of few-qubit quantum gates.

A quantum bit can hold even more information, e.g. up to two bits using superdense coding. Quantum computers will be able to analyze the vast amounts of data collected by telescopes and seek out Earth-like planets. Quantum computational models will help determine how diseases develop. Google itself is using a quantum computer to help design software that can distinguish cars from landmarks.

A quantum may be the smallest measurable finite unit, but quantum computing may unlock doors to infinity. It is expected to be the next biggest invention in computing, despite the fact that most experts concede the first quantum computer may be some years off.

Source:
Wikipedia: Quantum computing
Research Blog: What is a Quantum Bit?

Artificial Intelligence in Medicine: Enhancing the World of Medicine


Our fathers and forefathers always wanted us to become either a doctor, an engineer, a scientist, or a professor. But how can we, children of today's generation, live within this boundary? Obviously we want more than this, don't we? How can we be satisfied with being a scientist or a doctor, specialized in only one particular field, when we want to innovate more for today's world? For such huge innovations, wouldn't collaboration be the best idea? Yes, collaboration has been a sensational example for today's world, and in this blog I am going to share one tremendous example of it.

AIM (Artificial Intelligence in Medicine) is an example of such collaboration between computer scientists and healthcare professionals. Medical artificial intelligence is primarily concerned with the construction of AI programs that perform diagnosis and make therapy recommendations. From the very earliest moments of modern history, scientists and doctors alike were captivated by the potential such a technology might have in medicine. With intelligent computers able to store and process vast stores of knowledge, the hope was that they would become the perfect "doctor in a box", assisting or surpassing clinicians with various challenges in medical science.

AIM systems are expected to occupy as many branches of medical treatment as they can, starting with generating alerts and reminders: an expert system attached to a monitor can warn of changes in a patient's condition, while in less acute circumstances it might scan laboratory test results or drug orders and send reminders or warnings through an e-mail system.

Diagnostic assistance is the next, and most important, expectation of AIM. When a patient's case is complex or rare, or the person making the diagnosis is simply inexperienced, an expert system can help come up with likely diagnoses based on patient data. In the first decade of AIM, most research systems were developed to assist clinicians in the process of diagnosis, typically with the intention that they would be used during a clinical encounter with a patient. But some of the psychological basis for developing this type of support is now considered less compelling, given that situation assessment seems to be a bigger issue than diagnostic formulation.

DXplain is one example of such a clinical decision support system, developed at Massachusetts General Hospital. It is used to assist in the process of diagnosis: it takes a set of clinical findings, including signs, symptoms, and laboratory data, and produces a ranked list of diagnoses. It provides justification for each differential diagnosis and suggests further investigations. The system contains a database of crude probabilities for over 4,500 clinical manifestations associated with over 2,000 different diseases.
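To give a feel for how such a system ranks diagnoses, here is a toy sketch. The findings, diseases, and probabilities are invented for illustration and have nothing to do with DXplain's actual database or algorithm:

```python
# Toy sketch: rank diseases by how well they explain the observed
# findings, using crude per-disease probabilities (made-up numbers).
FINDING_PROBS = {
    "flu":     {"fever": 0.90, "cough": 0.80, "rash": 0.05},
    "measles": {"fever": 0.85, "cough": 0.50, "rash": 0.90},
    "allergy": {"fever": 0.05, "cough": 0.60, "rash": 0.40},
}

def rank_diagnoses(findings):
    # Score each disease by the product of the probabilities of the
    # observed findings (a crude naive-Bayes-style likelihood).
    scores = {}
    for disease, probs in FINDING_PROBS.items():
        score = 1.0
        for f in findings:
            score *= probs.get(f, 0.01)  # small prob for unlisted findings
        scores[disease] = score
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

print(rank_diagnoses(["fever", "rash"]))  # measles ranks first here
```

A real system would also weigh disease prevalence and justify each candidate, but the ranking idea is the same.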

Artificial Intelligence in Medicine (AIM) is evolving into a field in which interactions with the outside world are not only natural but mandatory. Although the basic research topics in AIM may be those of artificial intelligence, the applied issues touch more generally on the broad field of medical informatics.

TORRENT: Leech, Seed, Peer, and Everything You Need to Know


In simple words, You are a Leecher/Peer until your download completes. You become a Seeder once you have 100% download.

Most of us know well what a torrent is and where we use it. But what exactly is a torrent? How does BitTorrent work? Why torrent? Is torrent legal? How are large files hosted on the Internet so easily?

An American computer programmer, Bram Cohen, came up with the concept of sharing a file over the Internet among different hosts. BitTorrent was first released by Bram Cohen in 2001, but it took two years before the new file-sharing protocol gained a notable audience. In the years that followed, millions of torrent files were downloaded and shared billions of times. Cohen designed a peer-to-peer communication protocol for distributing data over the Internet, termed the BitTorrent protocol. Peer-to-peer file sharing is different from traditional file downloading: you use a software program (rather than your web browser) to locate computers that have the file you want. Because these are ordinary computers like yours, as opposed to servers, they are called peers. The BitTorrent protocol divides a large file into small chunks, allowing users to download sections of it and to exchange those sections among themselves until the download completes. The protocol uses less bandwidth from the file's creator, which is a great advantage for its distribution in the long term.

How does BitTorrent work?

Unlike some other peer-to-peer downloading methods, BitTorrent is a protocol that offloads some of the file-tracking work to a central server (called a tracker). Another difference is that it uses a principle called tit-for-tat: in order to receive files, you have to give them. With BitTorrent, the more files you share with others, the faster your downloads are. Finally, to make better use of available Internet bandwidth (the pipeline for data transmission), BitTorrent downloads different pieces of the file you want simultaneously from multiple computers.
It's a fairly simple concept. When you download a torrent, you aren't downloading the file from one specific person, but rather from many different sources who share the file. For example, say I am downloading a movie of around 300 megabytes at 200 kB/s from 10 different sources; the average transfer rate per source is then 20 kB/s, so from each source you are downloading the same file (but different pieces of it) at an average of 20 kB/s.

The thing is that some people have slow Internet, so you may download faster from others. Say you have 10 sources but 5 of them are on dial-up, and you receive a maximum of 5 kB/s from each of those; the remaining ~175 kB/s would come from the other 5 users, who may be on broadband or faster.

It's the same concept for seeding. Once you download the file and allow it to seed, a lot of people connect to your computer, since you are hosting the file, not just one person. Say you are seeding a file for 30 people and your upload speed is 150 kB/s; each of them receives an average of 5 kB/s.
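The back-of-the-envelope arithmetic in the examples above can be written out directly (all numbers are the made-up figures from the text):

```python
# Illustrative swarm arithmetic, using the example numbers above.
file_size_mb = 300
total_down_kbps = 200        # aggregate download rate in kB/s
sources = 10

per_source_kbps = total_down_kbps / sources
print(per_source_kbps)       # 20.0 kB/s from each source on average

# Time to fetch the whole file at the aggregate rate:
file_size_kb = file_size_mb * 1024
minutes = file_size_kb / total_down_kbps / 60
print(round(minutes, 1))     # 25.6 minutes

# Seeding: one uploader at 150 kB/s shared among 30 downloaders.
per_downloader_kbps = 150 / 30
print(per_downloader_kbps)   # 5.0 kB/s each
```

Note the symmetry: download speed aggregates across sources, while upload speed is divided among downloaders.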

Your friend's Internet connection impacts the speed of the download, and he may also be unlucky and downloading from sources that have slow Internet speeds (it happens).

Is torrent legal?

If the item is copyrighted and you don't own it, downloading it for free is not legal, whether you use a torrent or not. The protocol itself, however, is perfectly legal. Torrents may be primarily used for piracy at present, mainly because of the protocol's decentralized nature, but there are many legal uses of BitTorrent: many Linux distros prefer torrents for pushing out releases, as this reduces the stress on their servers.


What is seeding?

Seeding is where you leave your BitTorrent client open after you've finished your download to help distribute it (you distribute the file while downloading, but it's even more helpful if you continue to distribute the full file after you have finished downloading). Chances are that most of the data you got came from seeds, so help give back to the community! It doesn't require much: the client will continue seeding until the torrent is removed (right-click the torrent, then hit Remove). Proper practice is to seed until your upload:download ratio is at least 1.00.

What are seeds, peers and leeches in Torrents' language?

SEEDERS are those who have already downloaded the file (initially, only the uploader of the torrent seeds to others). You may notice that after your download is complete, the torrent turns from DOWNLOADING to SEEDING. A seeder is someone from whom you can download a piece of the file; seeders therefore increase the overall availability of a file on the P2P (peer-to-peer) network.

PEERS are those who are downloading and uploading at the same time. They do not possess the whole file, only parts of it. A peer is anyone involved in the file-sharing activity; it is a generic term.

LEECHERS are those who don't have all parts of the file and cannot share the part you need. If there are zero seeders, it is doubtful you will ever finish downloading that torrent; only very rarely can you download a whole file from leechers alone. A leecher is someone who has downloaded a file but is not sharing it back to the P2P (peer-to-peer) network, which decreases the file's overall availability.

In simple words, You are a Leecher/Peer until your download completes. You become a Seeder once you have 100% download.

What is inside a BitTorrent file?

The address of one or more trackers and information about the files. The tracker is a server that knows which users have the actual file.
The basic principle is:
  • Your BitTorrent program, that opens the BitTorrent file, connects to the tracker(s) and gets a list of people who have the file. 
  • Your BitTorrent program connects to those people and requests pieces of the file. 
  • You are now also on the list, so any user opening the BitTorrent File after you will get your address as well and can download the pieces from you that you already have downloaded. 
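The three steps above can be sketched as a simplified in-memory model. The tracker dictionary and peer names are hypothetical; this is not the real BitTorrent wire protocol:

```python
# Simplified model of the tracker/peer flow described above.
tracker = {"example.torrent": ["peerA", "peerB"]}  # peers known to the tracker

def join_swarm(torrent, me):
    peers = list(tracker[torrent])   # 1. ask the tracker who has the file
    tracker[torrent].append(me)      # 3. you are now on the list too
    return peers

def download(pieces_needed, peers):
    # 2. request different pieces from different peers simultaneously
    served_by = {}
    for i, piece in enumerate(pieces_needed):
        served_by[piece] = peers[i % len(peers)]  # round-robin assignment
    return served_by

peers = join_swarm("example.torrent", "me")
print(download(["piece0", "piece1", "piece2"], peers))
```

Once you hold pieces, later joiners can fetch them from you, which is how the swarm scales without a central file server.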

How the first seeding starts in torrent? 

  • You create a torrent using any torrent client 
  • add trackers (to manage a list of all the swarms and peers) 
  • distribute the .torrent file 
  • users (torrent clients) read the .torrent file and obtain a list of peers and seeders from the trackers by querying the unique hash of that torrent 
  • before connections are made to those peers, information such as total pieces, piece size, file names, and file hierarchy is read from the .torrent file 
  • connection setup and downloading starts 

What is the first seeded torrent file?

The Oldest Torrent

The torrent file that has been around for the longest time, to our knowledge, is The Matrix ASCII. We already crowned it the oldest torrent back in 2005, and as of today (Nov 7, 2010) it is still active, with a few downloaders and only one seeder.
The torrent file in question was created in December 2003, when sites like isoHunt, The Pirate Bay, and Torrentz.com were only a few months old and Facebook and YouTube didn't yet exist. Thus far, this torrent has survived a mind-boggling 2,500 days.

What is the largest torrent file?

When we refer to the largest torrent, we mean the single .torrent file that downloads the most data, not the size of the .torrent file itself. There are several huge torrents active at the moment, but the record goes to a torrent with a 746.70 GB collection of all 2010 World Cup soccer matches (~6 GB per half).

References:


Assistive Domotics: Logical Design of Automated Door in a Smart Home

Background

Assistive technology is the form of home automation that includes assistive, adaptive, and rehabilitative devices for people with disabilities, along with the process of selecting, locating, and using them. Microsoft Corporation's CEO Steve Ballmer once said, "The number one benefit of technology is that it empowers people to do what they want to do. It lets people be creative. It lets people be productive." The US Census Bureau has projected that by 2010, 13% of the population will be 65 or older (Cheek 2005), and that by 2030 there will be 9 million Americans older than 85. Providing physical health and security personnel for all of them will be a tough task because the cost will be massive, so automated machines will be the most cost-effective alternative.


History of Assistive Domotics

Today every embedded device is automated to some extent in order to ease users' lives. Nikola Tesla's design of the first remote-controlled boat (1898 A.D.) is the first known automated device in history. As for home automation, in the early 1930s World's Fair models of fictional automated homes were exhibited to excite the spectators. The invention of the Complex Number Calculator (CNC) in 1940, the mouse in 1964, Mac OS in 1984, and the first wireless system in 1989 were other developments in automation. In 1984, home automation technology spread to garage doors, security systems, infra-red control, fiber optics, and more. Likewise, a separate branch named assistive domotics emerged, emphasizing the development of automated appliances for elderly and disabled people.


Machines for Assistive Automation

A Home Robot

A home robot is a mobile device for moving about and performing tasks such as vacuuming, measuring, communicating, and fetching objects. This device is useful for elderly people who have problems with ageing and back pain; many of their daily activities can be handled by such a robot.



Assistive Bed

An assistive bed is an externally monitored machine specially designed for elderly and disabled people with spinal cord disabilities or paralysis. When the user rests in the bed, its string-like structure expands and contracts, calculating the mass it supports.

Designs in Smart Home

The development of home automation is in its early stage, and much research in the field of automation is ongoing. There are obstacles to building a completely automated system; none of the existing systems in the world is completely automated by itself. The main reason is that machines need external factors, such as humans and electricity, to begin functioning.

Design of Automated Door

Scenario
A security code must be entered in order to open the door and enter the house. This scenario is represented below with the help of a context diagram and a transition diagram.

State Definition
Here the automated door is deterministic in nature, and the system consists of three states:
Q = {MainDoor, PasswordCheck, MainHall},
two input symbols, authorized as '1' and unauthorized as '0', i.e. ∑ = {0, 1},
starting state q0 = MainDoor,
and final state F = {MainHall}.
A transition table illustrating the states involved while entering the main hall is represented below.
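Since the transition table and diagrams appear only as images in the original paper, here is one plausible reading of the DFA as code. The exact transitions are an assumption based on the scenario (a correct code, '1', advances you; a wrong code, '0', sends you back to the door):

```python
# DFA sketch of the automated door (assumed transitions, since the
# original transition table is an image not reproduced here).
TRANSITIONS = {
    ("MainDoor", "1"): "PasswordCheck",
    ("MainDoor", "0"): "MainDoor",
    ("PasswordCheck", "1"): "MainHall",
    ("PasswordCheck", "0"): "MainDoor",
    ("MainHall", "1"): "MainHall",
    ("MainHall", "0"): "MainHall",
}
START, FINAL = "MainDoor", {"MainHall"}

def accepts(inputs):
    # Run the DFA over the input string of '0'/'1' symbols.
    state = START
    for symbol in inputs:
        state = TRANSITIONS[(state, symbol)]
    return state in FINAL

print(accepts("11"))  # True: authorized at both checks
print(accepts("10"))  # False: rejected at the password check
```

This deterministic table makes the design directly simulatable, which is the point of the logical design in the paper.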

Context Diagram
Below is the context diagram, which clearly shows the working mechanism of an automated door:

Transition Table
Transition Table of automated door as per the context diagram is shown below:






Transition Diagram
Transition diagram of the automated door is shown below:










Conclusion
This research was conducted in partial fulfillment of the course "Automata and Formal Languages". The topic was chosen seeing the potential of automation in houses and business complexes. With this logical design, we can simulate the result using languages like LISP and PROLOG for convenience. I have also designed another logical implementation, of the movement of a wheelchair within a smart house, which can be seen in the reference paper.

Reference:

Sanjog Sigdel, "Assistive Domotics: Logical Design of Automated Home and Movement of Wheelchair in a Smart Home", published on ResearchGate [accessed Jan 29, 2017].