Abstract Approximately 65 million years ago, 77% of life on Earth faced Armageddon in one of the planet's greatest mass extinctions, caused by the strike of an asteroid roughly 10 km in diameter. Dinosaurs never had space agencies, but humans do. Scientists have developed mechanisms for asteroid defense that can aid us whenever an impact is feared, as with the close approach of Asteroid 99942 Apophis that mankind will face in 2029.
I. Introduction
Approximately 65 million years ago, an asteroid roughly 10 km in diameter hit the Earth with such a gigantic impact that it killed 77% of the species living on the planet at the time.
II. What is an Asteroid?
Before we unravel the threats posed by Apophis, some terms must first be clarified to avoid confusion. People often mix up asteroids, meteors, meteorites, meteoroids, and comets, assuming they are all just flying space rocks. In fact, they differ from one another in several important ways.
III. How often do NEOs hit the earth?
Smaller NEOs hit the Earth every day. At this very moment, space debris ranging from grains of dust to small rocky objects, which may or may not burn up in the atmosphere, is entering the Earth. But what about the bigger ones, perhaps the size of the one that caused the Cretaceous-Paleogene mass extinction? Thanks to the existence of Earth's best buddy, the Moon, astronomers can study the history of meteor and asteroid impacts: the Moon's surface is filled with impact craters, and because the Moon lacks an atmosphere strong enough to cause weathering, these craters have remained untouched since they formed. From studies of craters on Earth and on the Moon, astronomers have learned that asteroids the size of the one that killed the dinosaurs tend to hit Earth once every 100 million years on average. Objects the size of the Tunguska impactor hit on average once every 200 years, and small asteroids like the one in the Russian impact of 2013 arrive roughly once a decade. But the question remains: what if an asteroid the size of Apophis, roughly 370 meters in diameter, approaches Earth as a Potentially Hazardous Asteroid (PHA)? What can astronomers do in this situation?
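The impact frequencies above can be turned into a rough expectation of how many strikes of a given scale to anticipate in a time window. The helper below is a hypothetical illustration; the recurrence figures are the approximate ones quoted in this section, not precise values.

```python
# Illustrative sketch: average recurrence intervals for impactors of
# different scales, using the rough figures quoted in the text above.
RECURRENCE_YEARS = {
    "dust_to_small_rocks": 0,        # essentially continuous, daily infall
    "chelyabinsk_2013": 10,          # small asteroid, roughly once a decade
    "tunguska": 200,                 # roughly once every two centuries
    "dinosaur_killer": 100_000_000,  # ~10 km object, ~every 100 million years
}

def expected_impacts(scale: str, window_years: int) -> float:
    """Rough expected number of impacts of a given scale within a time window."""
    interval = RECURRENCE_YEARS[scale]
    if interval == 0:
        return float("inf")  # small infall is effectively continuous
    return window_years / interval

print(expected_impacts("tunguska", 1000))  # about 5 Tunguska-class events per millennium
```

The lookup makes the scaling intuition concrete: rarity grows steeply with impactor size, which is why planetary defense focuses on finding the large, rare objects early.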
IV. Asteroid Defense Mechanism and DART
A profound question: what if the dinosaurs had had space agencies that developed strategies to protect them from the asteroid mass extinction? Could they have saved themselves from ceasing to exist? Being hypothetical, the question cannot be answered. What is known is that the dinosaurs had no warning of their doom until they saw a streak of light growing brighter and brighter before it struck Earth and took out 77% of its species. So what if they had had enough warning time; could they have used their natural instincts to flee the asteroid? Humans, unlike dinosaurs, have many working space agencies filled with brilliant minds that protect us from dangers such as asteroid strikes, as well as teach us about the universe we live in. Over the years, scientists have developed an Asteroid Defense Mechanism of five steps to protect mankind from such massive disasters.
V. Where does Apophis stand here?
Calculations by JPL (the Jet Propulsion Laboratory at Caltech) initially showed a 2.7% probability of an impact in 2029, but it was quickly ruled out after further observations; a small remaining possibility of a 2068 impact was later eliminated as well.
VI. Conclusion
Awareness needs to be raised about asteroid defense mechanisms and how scientists contribute to keeping mankind safe. More funding also needs to go to space agencies to develop their asteroid-defense strategies; if fear ever strikes, space agencies coming together will be our only hope.
VII. References
Abstract Imagine being able to take a mental snapshot of whatever you are looking at. It sounds like photographic memory, a superhuman skill: storing everything you have ever seen, filing it away like a document in a cabinet, and recalling it with all the needed details. The ability to keep perceiving an object shortly after looking away is known as eidetic memory. For most people, the image lasts a few seconds or even less than one second. Accordingly, in 1964, a study investigated the relationship between eidetic memory and age: in the experiments, children obtained the best results when asked to describe the components of an image after seeing it for 30 seconds. Some people, however, appear able to memorize books, images, and all types of text, a faculty known as photographic memory.
I. Background
Eidetic memory refers to a person's ability to recall a huge number of pictures, sounds, and objects in apparently unlimited detail.
II. Mechanism
The brain is widely considered the organ in charge of the body's whole range of activities. Eidetic memory takes place in the posterior parietal cortex. The parietal cortex is responsible for integrating data from many senses to create a cohesive picture of the environment; it combines information from the ventral and dorsal visual pathways. This skill helps us respond to objects in the environment by coordinating our movements. The parietal lobe is also in charge of bodily sensations including touch, temperature, pressure, and pain. As a result, this area of the brain is extremely critical to the human body. Furthermore, it is responsible for a variety of memory functions, including eidetic memory.
III. Conclusion
Finally, despite the disagreement among scientists over the existence of photographic memory and eidetic memory, the cases and findings mentioned here suggest that humans can retain knowledge in the form of visuals. According to research, exercising four hours after an event might help people remember it better. And although memories are fairly real, even if not living, trust in them makes them more tangible. As Steven Wright said: "Everyone has a photographic memory; some just don't have film."
IV. References
Abstract Neuroscience is one of the greatest supporters of our understanding of human physiology and of what actually makes us who we are. How do we learn? Why do some people learn things more quickly and easily than others? We will go on a journey inside the most complex organ in the universe, learning about the science of Neuroplasticity while we try to answer these questions. After reading this article, you will leave with a new recognition of how majestic your brain is. Your plastic brain is regularly and continually being shaped by the world around you, and that change can be for the better or for the worse. Understanding that everything you do, encounter, and experience can change your brain is crucial. Moreover, learning is not easy: it involves a physical change to your brain's structure, and to achieve that you have to practice, exercise, and struggle toward your dream.
I. Introduction
Everything we understand about the brain is developing at a stunning and astonishing pace; much of what we thought we understood about the brain has turned out to be inaccurate or incomplete. For example, we used to believe that the brain couldn't change after childhood, which is a misconception. Another misconception is that the brain is silent when we are at rest; it turns out that even when we are thinking of nothing, our brain is extremely active. Later, many advances in technology such as MRI allowed us to make important discoveries. Perhaps the most exciting and intriguing one is that the brain changes every time we learn a new fact or skill. Our brain is actually changing with every single behavior. This is what we call the science of Neuroplasticity.
The brain can change in two different ways to sustain learning. The first is chemical: the brain works by transferring chemical signals between neurons, which triggers a sequence of actions and reactions, but this supports only short-term improvement. The second is by modifying its physical structure during learning: the brain can alter the connections between neurons, changing its arrangement and composition, which takes more time and is related to long-term improvement. Structural changes can also form integrated networks and regions that function together for learning and create specific regions essential for particular behaviors, changing the structure of the brain in the process.
II. The Brain
Your brain is needed for everything you do: how you think, feel, and act. The brain is the most complex organ; nothing we know of is as complex as the human brain. It is estimated to contain about one hundred billion nerve cells, and each neuron connects to other cells not through a single one-to-one connection but through up to ten thousand individual connections. So, fun fact: you have more connections in your skull than there are stars in the Milky Way. Moreover, even though your brain is only 2% of your body weight, it uses 20-30% of the calories you consume. So, of the breakfast you had this morning, roughly a quarter to a third goes to feed this 2% of your body's weight.
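The breakfast claim above is easy to check with quick arithmetic. The meal size here is a hypothetical example; only the 20-30% share comes from the text.

```python
# Back-of-the-envelope check: how many calories of one meal feed the brain,
# given the 20-30% share quoted in the text. The 600 kcal meal is assumed.
breakfast_kcal = 600
brain_share_low, brain_share_high = 0.20, 0.30

low = breakfast_kcal * brain_share_low    # 120 kcal
high = breakfast_kcal * brain_share_high  # 180 kcal
print(f"The brain's cut: {low:.0f}-{high:.0f} kcal of a {breakfast_kcal} kcal meal")
```

For a 600 kcal breakfast, that works out to 120-180 kcal, roughly a quarter to a third of the meal for an organ that is 2% of body weight.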
III. An Intriguing Case Study
IV. You have to do the work!
The main driver of modification in the brain is nothing but our behavior. There is no such thing as a drug for neuroplasticity; nothing is more efficient than practice and exercise. The bottom line is: you have to do the work. In fact, research has shown that increased difficulty and struggle during practice actually lead to more prominent development in the brain. The catch is that neuroplasticity can work either the positive way, when you learn something new or gain a skill, or the negative way, when you forget something or become addicted to drugs. The brain is remarkably plastic; it is shaped not only by what you do but also by what you don't do.
V. Conclusion
Behaviors that we employ every day are critical; they are changing our brains. So, now that you have finished reading this article, your brain will not be the same as when you started reading. The amazing part is that every reader's brain will change differently. Understanding these differences and variabilities will enable the next great advance in the field of neuroscience. Study how you learn best. Repeat the behaviors that keep your brain healthy, and break those that make it unhealthy. Practice! Learning is all about doing the work that your brain requires. The best strategies will vary between people; in fact, they will vary even within a single individual. For instance, one person might learn to play the piano quickly but struggle to learn football. So, when you leave today, go out and build the brain you want!
VI. References
Abstract Approaching the end of the last decade, artificial intelligence has been utilized in fields such as healthcare, e-commerce, agriculture, and more. For the purpose of diminishing the inaccuracies found in human-adapted practices, AI technology is often more efficient: it produces fewer errors and can be reformed and applied in numerous fields. The environmental companion, a product of AI technology, renovates the foundation of recycling. With access to a recycling medium, people can view their carbon footprints, monitor their plastic waste, and recycle their materials in exchange for virtual points.
I. Introduction
Plastic waste is drawn from two primary sources, ordinary littering and materials disposal, which eventually pile up in water bodies. For instance, the Danube River, the second largest river in Europe, which flows into the Black Sea, transports an estimated 4.2 metric tons of plastic into its drainage basin. Physical factors such as wind and current flow determine the buoyancy and movement of these plastic items within a water body; for example, windage contributes to the force required to transport them.
II. AI application in mitigating mismanagement
Recycling, unlike other disposal methods, is limited by factors such as cost, availability, and the collective efforts of the community.
III. The philosophy behind the application
The environmental companion is an application in which people access a credible medium through which they can recycle and thereby approach zero plastic waste. The application consists of three sections, each dedicated to a function, and all following a common feedback mechanism; during the design of the application, it was found that users engaged only seldom when it consisted mostly of plain text or illustrative graphics. The first section is the profile, where data about the user is stored, such as name, email address, and a unique user code. Judging by patterns of success in the app market, a sense of personalization produces more engagement with the overall process. Furthermore, consistent naming of the sections and of their purposes helps the user be more decisive when using the application. The profile also displays the virtual points ("environs") added at the end of every successful recycling procedure; these points can be traded for biodegradable materials with partnering agencies such as the local recycling committee. The second section is the data recorder: here, consumers record their plastic-usage data through AI recognizers that read the serial code embedded on the material, which encodes dimensions such as the weight, volume, and density of the item. The third section is the activity curve, which shows the amount of plastic used and the frequency of recycling over time. Following the science of feedback, the user will mentally tie every rise and fall of the activity curve to the virtual points they gain. Thus, this gamification system will trigger more engagement and therefore more recycling.
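The three sections above can be sketched as a tiny data model. Everything here is a hypothetical illustration (the class names, the points formula, and the example values are assumptions, not the application's real design):

```python
# Minimal sketch of the app's three sections: profile (user + points),
# data recorder (logged items), and activity curve (the history list).
from dataclasses import dataclass, field

@dataclass
class RecycleEvent:
    serial_code: str   # code embedded on the material
    weight_g: float    # dimensions encoded by the serial code
    volume_ml: float

@dataclass
class User:
    name: str
    email: str
    code: str                                    # unique per-user code (profile)
    environs: int = 0                            # virtual points
    history: list = field(default_factory=list)  # feeds the activity curve

    def record(self, event: RecycleEvent) -> None:
        """Data-recorder section: log a recycled item and award points."""
        self.history.append(event)
        self.environs += max(1, int(event.weight_g // 100))  # assumed formula

user = User("Sara", "sara@example.com", "ENV-0001")
user.record(RecycleEvent("PL-778", weight_g=250, volume_ml=500))
print(user.environs)  # 2 points for a 250 g item under the assumed formula
```

Plotting `history` over time would give the activity curve, and tying point awards to each logged event is what closes the feedback loop described above.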
IV. Proceeding to the final step
Upon fulfillment of the recycling task, users are required to deposit their materials in designated trash bins. These bins are exclusively designed with weight and volume sensors; the sensors are part of a system that not only measures these dimensions but also communicates data back and forth between the bin and the administrative application. The bins are designed for the sole purpose of collecting the to-be-recycled materials and ensuring credibility between administrators and users when handing out virtual points. When a user initiates a recycling process by having a serial code recognized in the application, they receive an email identifying the nearest bin in their proximity. Once the materials are deposited in the bin and the measured dimensions are congruent with the recorded ones, the recycling process is affirmed and the user is notified of a rise in their virtual points.
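The bin-side verification handshake can be sketched as a congruence check: the dimensions measured by the bin's sensors must match what the serial code recorded before points are released. The registry contents and the 5% tolerance below are assumptions for illustration.

```python
# Sketch of the deposit-verification step: affirm the recycling process only
# if the bin's sensor readings are congruent with the serial-code record.
REGISTRY = {"PL-778": {"weight_g": 250.0, "volume_ml": 500.0}}  # assumed records

def verify_deposit(serial_code: str, measured_weight_g: float,
                   measured_volume_ml: float, tolerance: float = 0.05) -> bool:
    """Return True when bin measurements match the item's recorded dimensions."""
    expected = REGISTRY.get(serial_code)
    if expected is None:
        return False  # unknown serial code: no points
    weight_ok = abs(measured_weight_g - expected["weight_g"]) <= tolerance * expected["weight_g"]
    volume_ok = abs(measured_volume_ml - expected["volume_ml"]) <= tolerance * expected["volume_ml"]
    return weight_ok and volume_ok

print(verify_deposit("PL-778", 248.0, 505.0))  # True: within 5% tolerance
print(verify_deposit("PL-778", 180.0, 500.0))  # False: weight mismatch
```

A tolerance band is the natural design choice here, since real sensors never reproduce the recorded dimensions exactly while still needing to reject substituted items.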
V. Conclusion
The properties that make plastic affordable, available, and widely used in the market also make it the foremost pollutant in water bodies.
VI. References
Abstract
Since the rise of technology, many members of the new generations have been attracted to programming and computer science as their majors. This gave rise to large numbers of programmers all over the world with different skill sets and levels of programming experience. Open-source software (OSS) comes as a natural result of the wide variety of ideas that come to programmers' minds and the collaborative nature of the field, which emphasizes building bigger software from smaller pieces of free and simple code. Open-source software has come a long way since the first major piece of OSS, the Linux kernel, and its development.
I. Introduction
Since the inception of OSS (open-source software), it has been a major source of revenue in many places, from small startups to international multimillion-dollar conglomerates, and has become a popular staple of IT departments and of any successful IT toolkit. With the Internet's arrival in the early 1990s, programmers from various locations around the world have been able to work together and distribute, share, and collaborate on software easily. Along with the various advantages of OSS, the vastly improved ability of developers to communicate has increased the outreach of OSS. This popularity gave rise to many important pieces of OSS that became major parts of modern-day software, such as the Linux kernel, the GNU compiler collection (whose C++ compiler is one of the most widely used), Python, VLC, Mozilla's Firefox, and Blender. In this article, the role and position of OSS in education are discussed. First, a history of OSS is given, with the story of the first piece of OSS. Second, a case study on using OSS in education is reviewed. Finally, the results are compared to the possible outcomes if the same experiment were repeated in the modern day.
II. History and Background
Since its creation, the history of open-source software has been filled with achievements, failures, and unique events that give it its rich, vibrant, and interesting background. The history of OSS starts with Richard Stallman and the decline of free software. Back in the early days of consumer electronics, most software was distributed on physical media containing both the source code (the human-readable code) and the machine code to execute the program; both versions were packaged because early software needed to be modified by the user to run on different machines and systems. With the rise of computer operating systems, software development costs started to increase significantly relative to hardware development costs at the time, and the increase led to a slew of antitrust and copyright lawsuits in the technology field. After the dust had settled, much if not most released software shipped without its source code, with only the machine code necessary to run it. This trend worried programmer and future free-software activist Richard Stallman, who feared that without legal access* to source code, further collaborative modification of software would not be possible. In 1983 Stallman started the GNU (GNU's Not Unix) project in hopes of countering the closed-source trend. He also founded the Free Software Foundation and invented "copyleft". Copyleft was made to be the opposite of copyright for software: it protects the freedom to contribute to, modify, and redistribute software. He implemented copyleft in GNU's General Public License, which requires any derivative work to follow the same license, preventing modifications of the software from being turned into closed-source software.
The term "open-source" was coined in 1998, commonly credited to Christine Peterson, and was promoted by prominent programmers such as Eric S. Raymond in hopes of appealing to bigger companies; the Open Source Initiative was founded to further spread the usage of the term. The coining of the term came shortly after Netscape (a prominent internet company at the time) released the source code of its web browser in hopes that outside programmers would help it grow further. When talking about the history of OSS, one cannot omit Linus Torvalds, the original creator of Linux (a Unix-like system), who added his creation to the OSS pool, accelerating Stallman's vision of a fully open-source computing life. Linux is now one of the most popular operating systems; thanks to its open-source nature, it comes in many varieties serving purposes from everyday desktop use to server operating systems to the main operating system for cybersecurity testing. *(Compiled machine code can often be converted back into source code using decompilers, but doing so is illegal under copyright laws that prevent reverse engineering of software.)
III. OSS in Education: Case Study by Swansea University
i. Prerequisites and background
In the case study presented in the paper "Open-Source Software in Computer Science and IT Higher Education: A Case Study", published in 2011, students at Swansea University were given a completely open-source setup: a full IT environment, from the students' text editor to the server's operating system and database software. The academic year included three main courses: Data Structures and Algorithms using Java, Rapid Java Application Development (an advanced Java class), and Design and Analysis of Algorithms. The first goal was for the teacher to have two devices (a desktop and a laptop) linked together securely through a server, which also served both static and dynamic pages to the students. The second goal was for the teacher to have a wide range of software to fully deliver a successful class; the software kit had to be fully open-source and had to include basics such as a web browser, text editor, and email client, as well as advanced software such as an IDE (integrated development environment) and a program for creating presentations and diagrams for the students.
ii. Used Software
For the case study the software used was as follows:
iii. Results
In the original paper reviewing the case study, three aspects were observed when the results were collected.
Second came student appeal: the paper found two main reasons why OSS might appeal to a university student. First, cost reduction affected not just the professor and the school but also the student, since students might want to use the same software at home, in personal projects, or in small startups; the cost reduction of OSS thus significantly improves the promotion of entrepreneurship. Second, exposing computer science students to software that they can themselves change and modify according to their needs gives an immense boost to their hands-on experience in software development. Third came ease of use; in this aspect the upper hand went to proprietary software, which was more mature and more widely used.
IV. Results with Modern Improvements in Mind
In this part of the article, the three previously mentioned result aspects of the original paper
are looked upon from a modern outlook.
First, the cost picture has not changed much: big technology companies compete with OSS either by giving high educational discounts (where OSS still wins with its free nature) or by offering stripped-down versions of their software for free (e.g., free Office online vs. paid Office 365), where OSS still gets the upper hand because its developers, not seeking profit through upfront cost, ship the full feature set.
Second, the appeal for computer science students has only gone up; with the ever-growing software development and OSS industries, the more experience a student has with software development, the higher their chance of being employed after graduation.
V. Conclusion
In conclusion, open-source software has improved significantly since the early days of its adoption, shedding many of its downsides while keeping its main advantage (freedom and freeness of use) safe and secure. It has also come a long way since the writing of the original paper, as shown in the modern-outlook section of this article. The improvements in the field of OSS, and the increasing importance of participating in it, only add to the importance of integrating it into modern higher education to help students prepare for a competitive job market and future endeavors.
VI. References
Abstract With the discovery of an Earth-like planet circling Proxima Centauri, the star closest to Earth and part of a three-star system, our small, lonely piece of space became a tiny bit less lonely in 2016. Proxima Centauri b is an Earth-like planet with about 1.3 times the mass of Earth, orbiting its star within the habitable zone, where the ideal conditions for liquid water exist. Liquid water is one of the most fundamental ingredients for kicking off life. When scientists examine exoplanets, they look for specific characteristics that indicate whether or not they might harbor life.
I. Introduction
The earliest recorded evidence of an exoplanet dates back to 1917, but it wasn't until 1992 that the detection of planets around the pulsar PSR B1257+12 provided the first confirmed discovery of exoplanets. This was only the beginning; since 1992 there has been a significant rise in discoveries, with over 4,700 exoplanets found in about 3,490 star systems to date. Proxima Centauri b is one of the most recent discoveries.
II. Discovery and Observation
The Pale Red Dot campaign, coordinated by Guillem Anglada-Escudé of Queen Mary University of London, searched for a small back-and-forth wobble in the star induced by the gravitational pull of a possible orbiting planet: the minor gravitational influence of the planet later labeled Proxima Centauri b on its host star, Proxima Centauri, a red dwarf overshadowed by its neighboring stars Alpha Centauri A and B. Previous research had suggested the presence of a planet orbiting Proxima; according to the data, something appeared to be happening to the star every 11.2 days. However, scientists couldn't say whether the signal was created by an orbiting planet or by another form of activity such as stellar flares. In 2016, the Pale Red Dot campaign confirmed the existence of Proxima Centauri b using Doppler spectroscopy, also known as the radial-velocity method, which involves making a series of observations of the spectrum of light emitted by the star.
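The size of the wobble the campaign measured can be estimated from the standard radial-velocity semi-amplitude formula, K = (2πG/P)^(1/3) · m_p sin i / M_star^(2/3), for a circular orbit with m_p ≪ M_star. The sketch below uses approximate, publicly quoted parameter values (sin i = 1 is assumed, so the planet mass is a minimum mass):

```python
# Rough estimate of the stellar wobble that Doppler spectroscopy must detect
# for Proxima Centauri b (circular orbit, planet mass << stellar mass).
import math

G = 6.674e-11                  # gravitational constant, m^3 kg^-1 s^-2
M_SUN, M_EARTH = 1.989e30, 5.972e24

P = 11.2 * 86400               # orbital period from the 11.2-day signal, in seconds
M_star = 0.12 * M_SUN          # Proxima Centauri, a small red dwarf (approximate)
m_p = 1.3 * M_EARTH            # approximate minimum planet mass (sin i = 1 assumed)

K = (2 * math.pi * G / P) ** (1 / 3) * m_p / M_star ** (2 / 3)
print(f"Stellar wobble semi-amplitude: {K:.2f} m/s")  # on the order of 1-2 m/s
```

A wobble of only a meter or two per second, slower than walking pace, is why confirming the planet required a dedicated spectroscopic campaign rather than a single observation.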
III. Physical Characteristics
i. Physical Attributes
ii. Host Star
IV. Habitability
A great number of factors from various sources are proposed to affect the habitability of red dwarf systems. The low stellar flux, high probability of tidal locking, small habitable zones, and significant stellar variability that planets of red dwarf stars face are all obstacles to their habitability.
V. Conclusion
We might never receive a first interstellar greeting, but the existence of a potentially habitable planet right in our cosmic neighborhood, Proxima Centauri b, is a sign that our efforts are not meaningless. Not all the questions have been answered; scientists think they can scan the planet's atmosphere for oxygen, methane, and water vapor using ESPRESSO and SPHERE on the VLT.
VI. References
Abstract Humans are honored among other species by their brains. The brain has drawn scientists' attention for many years past and will for many more to come. New technologies have helped scientists learn more about the brain, allowing them to achieve a leap in neuroscience and gather a great deal of information. Every day our brains are introduced to different problems that need to be solved. The level of difficulty matters little, as your brain responds in the same way. Problem solving takes place through a series of four steps: encoding, planning, solving, and responding. In these stages, distant brain regions must collaborate, and this collaboration is done by forming networks between them; the stronger the network between these regions, the more success in solving everyday problems. According to the active sites in the brain, math problems, with their four domains, differ from ordinary non-math problems.
I. Introduction
Humans are distinguished by their brains. This small organ, unique to humankind, remains a mystery, partly because it contributes to almost every action you take. Even when you are at rest, it is very active. After the introduction of new technologies like magnetic resonance imaging (MRI), it became possible to take pictures of the brain and monitor it during different actions for a better understanding of what happens inside it. This helped neuroscience flourish and yield real information about the brain, including the mechanism behind an action your brain performs almost every day: doing mathematics.
Mathematical problems are presented to everyone's brain regularly, varying in difficulty from simple arithmetic operations to advanced problems. Scientists have long wondered what happens while these problems are being solved, and whether simple problems are handled by the same brain regions responsible for solving more advanced ones. By exploiting new technologies in neuroscience, the brain was imaged passing through a four-step process to reach a solution.
II. Stages of solving problems
Furthermore, John R. Anderson and his team found in this study how different regions of the brain work through four stages of solving a problem. These stages, according to the study, are encoding, planning, solving, and responding.
III. Your brain’s response to doing mathematics
IV. The difference between the various level of problems on brain
V. Conclusion
Math and non-math problems differ greatly in the brain's reaction to each. The difficulty of each question matters less, since difficulty did not affect the brain's reaction. The two types showed a great difference in the brain's response, as each type has certain areas required for solving it. What makes the brain mysterious is how regions set to do a certain task, for example solving a mathematics problem, collaborate to form a network responsible for solving it.
VI. References