Stellarium is an open source planetarium that enables you to explore the night sky right on your own computer.
If you've seen the news recently, you may have noticed the many articles about the comet with the green tail (C/2022 E3), a once-in-a-lifetime sight.
In theory, you can see the comet through a telescope over several days, but I don't own a telescope. So my window to see the comet came down to the one night it would be visible without any special equipment.
As of writing this article, that would be tonight.
But as I write this article, there's full cloud coverage and rain.
So there's no comet sighting for me, at least not in real life. While looking up pictures instead, I discovered Stellarium, an amazing open source planetarium for your computer! This article is all about this open source way to view the universe.
Stellarium is free, open source planetarium software for Linux, Windows, and macOS. It's an amazing 3D simulation of the night sky that lets you explore and observe stars, constellations, planets, and other celestial objects. What's really cool is that it has a comprehensive database of stars, nebulae, galaxies, and much more. The software also lets you view the sky from any location on Earth at any time in the past or future, so you can look at a night sky your descendants may see. Stellarium also includes advanced features such as telescope controls and the ability to track the motion of celestial objects. Amateur astronomers and educators use Stellarium to explore the sky, and it is considered one of the best planetarium programs available.
This is the coolest part. That comprehensive database translates into high-quality sky maps rendered as realistically as possible. You can create customized views and pick a time and location of whatever you want to see. If you think, "It really can't be that realistic," you would be wrong: it even simulates the atmosphere, so your view looks as if you were standing on the ground staring at the sky yourself. There's an option to simulate looking through a telescope, which looks just as real as doing it from your backyard. It also includes a scripting language that lets you create animated events (a small example follows below), which is great if you're teaching or just want to mess around. Finally, it's available in a wide range of languages, meaning almost anyone can use it.
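As a taste of that scripting feature: the desktop application ships with a script console that runs an ECMAScript-based language. Here's a minimal sketch assembled from my reading of the scripting API docs; treat the exact call names (core.*, LandscapeMgr) as assumptions to verify against the official documentation.

// Minimal Stellarium script sketch (desktop script console, ECMAScript).
// Verify these API names against the Stellarium scripting docs.
core.clear("natural");                         // reset to a natural-looking sky
core.setDate("2023-02-01T20:00:00", "local");  // jump to a chosen evening
core.selectObjectByName("Moon", true);         // select the Moon, with a pointer
core.wait(2);                                  // pause for two seconds
LandscapeMgr.setFlagAtmosphere(false);         // switch the simulated atmosphere off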
Do you need to install anything to try it? Nope, you don't, and I'll give you a bit of a demo of how to use the web version.
First, go to the website. It asks for your location through your browser window and then shows you the sky. From where I was at this point of writing, I lucked out and caught the International Space Station (ISS) hanging out on my screen. It didn't last long, so I'm glad I grabbed it while I did.
(Jess Cherry, CC BY-SA 4.0)
I'll explain those buttons from right to left.
First on the list is the Full Screen button. If you want to sit and stare into space (pun intended), click that square, and you're full screen ahoy!
Next is a Night Mode button. Use this in the dark so your night vision won't be interfered with. I wasn't using this at night, so it looks rather odd.
(Jess Cherry, CC BY-SA 4.0)
The Deep Sky Objects button is next; it controls the tiny circles you see on the screen. When you click within a circle, it shows detailed information about that object, along with a link to additional resources if you want to research further.
(Jess Cherry, CC BY-SA 4.0)
Additional deep sky details.
(Jess Cherry, CC BY-SA 4.0)
Next is the Equatorial Grid button. This coordinate system is based on two measurements similar to latitude and longitude but made from right ascension and declination. Right ascension is measured in hours, minutes, and seconds eastward along the celestial equator (a full circle equals 24 hours), and the celestial equator itself is the projection onto the sky of the equator you see on every globe. I could go into further detail, but I suggest checking out resources from your local college or NASA. Here's what happens when you click that button.
Select the grid, which centers right on Polaris, more commonly known as the North Star.
(Jess Cherry, CC BY-SA 4.0)
Additional details are shown.
(Jess Cherry, CC BY-SA 4.0)
The next button over is the Azimuthal Grid. This grid type is based on altitude and azimuth relative to your location. Azimuth is an angular measurement in decimal degrees. Again, check with your local planetarium, college, educators, and NASA for more details.
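For the curious, the two grids are linked by a standard textbook relation (my addition, not from the original article): with altitude $a$, declination $\delta$, observer latitude $\varphi$, and hour angle $H$ (local sidereal time minus right ascension),

\[
\sin a = \sin\delta \,\sin\varphi + \cos\delta \,\cos\varphi \,\cos H
\]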
(Jess Cherry, CC BY-SA 4.0)
This picture shows the location where I'm technically "standing" and gives a good view.
(Jess Cherry, CC BY-SA 4.0)
Next is the Landscape button. When you click it, the scenic landscape disappears.
(Jess Cherry, CC BY-SA 4.0)
I liked the farm better, so I intend to turn it back on.
Remember when I explained that there's a realistic view of how the atmosphere visually interferes? Well, it's been on the entire time in my screenshots: you've seen light refracted and scattered by the atmosphere filtering the view, just like on any other night. The Atmosphere button removes that. I turned that visual off in the following screenshot to show you a whole new world.
(Jess Cherry, CC BY-SA 4.0)
And as you can see, a whole new level of the night sky opens up. While in theory, this would be amazing to see, we need oxygen, so the atmosphere needs to stay. However, I'll leave this button off for the remaining images in the tour.
Another entertaining button (to me, at least) is the Constellation Art button. You may be thinking, "Ok, cool, we get to see those lines in the sky like every other map I've seen." Nope! You get to see some pretty amazing artwork based on all those historical myths you've read about.
(Jess Cherry, CC BY-SA 4.0)
This is a fantastic function, and I love the detail. However, the Constellations button displays the traditional constellation lines if you prefer that view. Aside from the lines, it also includes names. When you click those names, you get information and the picture shown in the above artwork.
(Jess Cherry, CC BY-SA 4.0)
(Jess Cherry, CC BY-SA 4.0)
So now that I've introduced the buttons, I'll show the other features, starting with movement on the screen. It's a simple click and drag to change your view of the sky. When you remove the landscape, you can look down and see everything below you. There's a light-colored circle of fog around your view, which marks the horizon.
Earlier, I mentioned some cool telescope-related features. You can easily manage these. Start by clicking an object, star, or planet and using a few buttons. I'll click the Moon. Underneath the description, you'll see what looks like a target with a star in the middle.
(Jess Cherry, CC BY-SA 4.0)
Click that target icon to get a set of buttons to zoom in and out.
(Jess Cherry, CC BY-SA 4.0)
I zoomed in with the plus (+) button to see what I would get. The picture below appeared after I clicked the plus button about five times.
(Jess Cherry, CC BY-SA 4.0)
There's also a link button next to those zoom buttons allowing me to share this view at the exact time and date I wrote this article.
Since this is open source software, you can contribute in whatever way suits you: see the contributing guidelines, submit code via pull request (PR), report problems, or become a backer or sponsor of the project by donating here.
I stumbled on this amazing project while wandering the internet in search of something else. I mentioned Stellarium is installable on Linux, Windows, and macOS. In a future article, I'll cover installing and using it on a Raspberry Pi. I'm thinking of titling it "The galaxy in your hands," but I'm always open to title suggestions. Feel free to leave a comment with anything else you want to read about. Hopefully, you learned something new and will enjoy the web version of Stellarium. And maybe you just "spaced out" at work for a few minutes.
Original article source at: https://opensource.com/
ruptures is a Python library for off-line change point detection. This package provides methods for the analysis and segmentation of non-stationary signals. Implemented algorithms include exact and approximate detection for various parametric and non-parametric models. ruptures focuses on ease of use by providing a well-documented and consistent interface. In addition, thanks to its modular structure, different algorithms and models can be connected and extended within this package.
(Please refer to the documentation for more advanced use.)
The following snippet creates a noisy piecewise constant signal, performs a penalized kernel change point detection and displays the results (alternating colors mark true regimes and dashed lines mark estimated change points).
import matplotlib.pyplot as plt
import ruptures as rpt
# generate signal
n_samples, dim, sigma = 1000, 3, 4
n_bkps = 4 # number of breakpoints
signal, bkps = rpt.pw_constant(n_samples, dim, n_bkps, noise_std=sigma)
# detection
algo = rpt.Pelt(model="rbf").fit(signal)
result = algo.predict(pen=10)
# display
rpt.display(signal, bkps, result)
plt.show()
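If you know the number of change points in advance, you can swap the penalized PELT search for a method that accepts an explicit count. Here's a minimal variation on the snippet above (Binseg and its n_bkps argument are part of the documented ruptures API):

# alternative: binary segmentation with a known number of breakpoints
algo = rpt.Binseg(model="l2").fit(signal)
result = algo.predict(n_bkps=n_bkps)  # reuses n_bkps = 4 from above
rpt.display(signal, bkps, result)
plt.show()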
For questions about this package, its use, and bugs, use the issue page of the ruptures repository. For other inquiries, you can contact me here.
Installation instructions can be found here.
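In short, ruptures is published on PyPI, so if that link is unavailable, the standard install command works:

python -m pip install ruptures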
See the changelog for a history of notable changes to ruptures.

How to cite: If you use ruptures in a scientific publication, we would appreciate citations to the following paper:

C. Truong, L. Oudre, N. Vayatis. Selective review of offline change point detection methods. Signal Processing, 167:107299, 2020.
Author: Deepcharles
Source Code: https://github.com/deepcharles/ruptures
License: BSD-2-Clause license
Just as with software development, research under Horizon Europe promotes sharing research outputs as early and widely as possible, supporting citizen science, developing new indicators for evaluating research, and rewarding researchers.
Horizon Europe emphasizes open science and open source technology. The program evolved from Horizon 2020, which provided financial support for research projects that promoted industrial competitiveness, advanced scientific excellence, or solved social challenges through the process of "open science."
Open science is an approach to the scientific process based on open cooperative work, shared tools, and the diffusion of knowledge, as set out in the Horizon Europe Regulation and Model Grant Agreement. This open science approach aligns with the open source principles that provide a structure for such cooperation.
One of the basic foundational principles of open source software development is an "upstream first" philosophy. The opposite direction is "downstream," and together, upstream and downstream make up the ecosystem for a given software package or distribution. Upstreams are important because that's where source contributions come from.
Each upstream is unique, but generally, the upstream is where decisions are made and where the community for a project collaborates for the project's objectives. Work done upstream can flow out to many other open source projects. The upstream is also a place where developers can report bugs and security vulnerabilities. If a bug or security flaw is fixed upstream, then every downstream project or product based on the upstream can benefit from that work.
It is important to contribute side-by-side with the rest of the community from which you benefit. By working upstream first, you have the opportunity to vet ideas with the larger community and work together to build new features, releases, content, and more. It's far better for all the contributors to work together than for contributors from different companies, universities, or affiliations to build features behind closed doors and then try to integrate them later. Open source contributions can outlive the research project's duration, making a more durable impact.
As an example of such contributions, in the ORBIT FP7 EU project, a feature was developed by Red Hat (lower layers, such as Linux Kernel and QEMU) and Umea University (upper layers, such as LibVirt and OpenStack) and contributed to their related upstream communities. This enabled "post-copy live migration of VMs" in OpenStack. Even though that was done several years ago, that feature is still available (and independently maintained) in any OpenStack distribution today (as well as plain LibVirt and QEMU).
With open source upstream communities, the contributed research can extend beyond the research project's timeline by feeding into the upstream life cycle. This allows companies, universities, governments, and others to keep consuming and evolving the work, further securing the research's impact.
Original article source at: https://opensource.com/
DeepVariant is a deep learning-based variant caller that takes aligned reads (in BAM or CRAM format), produces pileup image tensors from them, classifies each tensor using a convolutional neural network, and finally reports the results in a standard VCF or gVCF file.
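That flow can be sketched in Python pseudocode. Every helper below (find_candidate_sites, pileup_image_tensor, write_vcf, cnn) is hypothetical and for illustration only; the real pipeline runs make_examples, call_variants, and postprocess_variants as separate programs (make_examples is visible in the runner's flags further down).

# Illustrative sketch of the DeepVariant flow described above.
# All helper names here are hypothetical, not the real DeepVariant API.

def deepvariant_sketch(reads_bam, ref_fasta, cnn, out_vcf):
    calls = []
    for site in find_candidate_sites(reads_bam, ref_fasta):  # hypothetical
        tensor = pileup_image_tensor(site, reads_bam)        # pileup image per candidate
        likelihoods = cnn.predict(tensor)                    # CNN classifies the tensor
        calls.append((site, likelihoods))
    write_vcf(out_vcf, calls)                                # hypothetical: VCF/gVCF output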
DeepVariant supports germline variant-calling in diploid organisms.
DeepTrio is a deep learning-based trio variant caller built on top of DeepVariant. DeepTrio extends DeepVariant's functionality, allowing it to utilize the power of neural networks to predict genomic variants in trios or duos. See this page for more details and instructions on how to run DeepTrio.
DeepTrio supports germline variant-calling in diploid organisms across several types of input data.
We recommend using our Docker solution. The command will look like this:
BIN_VERSION="1.4.0"
# --model_type must be exactly one of: WGS, WES, PACBIO, HYBRID_PACBIO_ILLUMINA.
# --num_shards uses all of your cores for make_examples; feel free to change it.
# --logging_dir is optional; it saves the log output for each stage separately.
# --dry_run defaults to false; if set to true, commands are printed but not executed.
docker run \
  -v "YOUR_INPUT_DIR":"/input" \
  -v "YOUR_OUTPUT_DIR":"/output" \
  google/deepvariant:"${BIN_VERSION}" \
  /opt/deepvariant/bin/run_deepvariant \
  --model_type=WGS \
  --ref=/input/YOUR_REF \
  --reads=/input/YOUR_BAM \
  --output_vcf=/output/YOUR_OUTPUT_VCF \
  --output_gvcf=/output/YOUR_OUTPUT_GVCF \
  --num_shards=$(nproc) \
  --logging_dir=/output/logs \
  --dry_run=false
To see all flags you can use, run: docker run google/deepvariant:"${BIN_VERSION}"
If you're using GPUs, or want to use Singularity instead, see Quick Start for more details or see all the setup options available.
If you're using DeepVariant in your work, please cite:
A universal SNP and small-indel variant caller using deep neural networks. Nature Biotechnology 36, 983–987 (2018).
Ryan Poplin, Pi-Chuan Chang, David Alexander, Scott Schwartz, Thomas Colthurst, Alexander Ku, Dan Newburger, Jojo Dijamco, Nam Nguyen, Pegah T. Afshar, Sam S. Gross, Lizzie Dorfman, Cory Y. McLean, and Mark A. DePristo.
doi: https://doi.org/10.1038/nbt.4235
Additionally, if you are generating multi-sample calls using our DeepVariant and GLnexus Best Practices, please cite:
Accurate, scalable cohort variant calls using DeepVariant and GLnexus. Bioinformatics (2021).
Taedong Yun, Helen Li, Pi-Chuan Chang, Michael F. Lin, Andrew Carroll, and Cory Y. McLean.
doi: https://doi.org/10.1093/bioinformatics/btaa1081
For more information on the pileup images and how to read them, please see the "Looking through DeepVariant's Eyes" blog post.
DeepVariant relies on Nucleus, a library of Python and C++ code for reading and writing data in common genomics file formats (like SAM and VCF) designed for painless integration with the TensorFlow machine learning framework. Nucleus was built with DeepVariant in mind and open-sourced separately so it can be used by anyone in the genomics research community for other projects. See this blog post on Using Nucleus and TensorFlow for DNA Sequencing Error Correction.
Below are the official solutions provided by the Genomics team in Google Health.
Name | Description
---|---
Docker | This is the recommended method.
Build from source | DeepVariant comes with scripts to build it on Ubuntu 20.04. To build and run on other Unix-based systems, you will need to modify these scripts.
Prebuilt Binaries | Available at gs://deepvariant/. These are compiled to use SSE4 and AVX instructions, so you will need a CPU (such as Intel Sandy Bridge) that supports them. You can check the /proc/cpuinfo file on your computer, which lists these features under "flags".
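On Linux, a quick way to check is a standard grep over /proc/cpuinfo (my suggestion, not a DeepVariant-provided command):

grep -o -w -e sse4_1 -e sse4_2 -e avx /proc/cpuinfo | sort -u

If the output lists the instruction sets, your CPU supports them.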
Please open a pull request if you wish to contribute to DeepVariant. Note, we have not set up the infrastructure to merge pull requests externally. If you agree, we will test and submit the changes internally and mention your contributions in our release notes. We apologize for any inconvenience.
If you have any difficulty using DeepVariant, feel free to open an issue. If you have general questions not specific to DeepVariant, we recommend that you post on a community discussion forum such as BioStars.
DeepVariant happily makes use of many open source packages, and we would like to specifically call out a few key ones. We thank all of the developers and contributors to these packages for their work.
This is not an official Google product.
NOTE: the content of this research code repository (i) is not intended to be a medical device; and (ii) is not intended for clinical use of any kind, including but not limited to diagnosis or prognosis.
Author: Google
Source Code: https://github.com/google/deepvariant
License: BSD-3-Clause license
A Mindmap summarising Machine Learning concepts, from Data Analysis to Deep Learning.
Machine Learning is a subfield of computer science that gives computers the ability to learn without being explicitly programmed. It explores the study and construction of algorithms that can learn from and make predictions on data.
Machine Learning is as fascinating as it is broad in scope. It spans multiple fields across Mathematics, Computer Science, and Neuroscience. This is an attempt to summarize this enormous field in one PDF file.
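To make "learn without being explicitly programmed" concrete, here is a minimal scikit-learn sketch (my own illustration; the mindmap doesn't prescribe a library). The model infers the rule y = 2x from examples instead of having it hand-coded:

from sklearn.linear_model import LinearRegression

# Four observed (x, y) pairs that happen to follow y = 2x.
X = [[1], [2], [3], [4]]
y = [2, 4, 6, 8]

# The rule is never written down; it is learned from the data.
model = LinearRegression().fit(X, y)
print(model.predict([[5]]))  # approximately [10.]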
Download the PDF here:
https://github.com/dformoso/machine-learning-mindmap/blob/master/Machine%20Learning.pdf
Same, but with a white background:
I've built the mindmap with MindNode for Mac. https://mindnode.com
This Mindmap/Cheatsheet has a companion Jupyter Notebook that runs through most of the Data Science steps that can be found at the following link:
Here's another mindmap which focuses only on Deep Learning
Data Science is not a set-and-forget effort, but a process that requires design, implementation, and maintenance. The PDF contains a quick overview of what's involved. Here's a quick screenshot.
First, we'll need some data. We must find it, collect it, clean it, and work through about five other steps. Here's a sample of what's required.
Machine Learning is a house built on Math bricks. Browse through the most common components, and send your feedback if you see something missing.
A partial list of the types, categories, approaches, libraries, and methodology.
A sampling of the most popular models. Send your comments to add more.
I'm planning to build a more complete list of references in the future. For now, these are some of the sources I've used to create this Mindmap.
- Stanford and Oxford lectures: CS20SI, CS224d.
- Books:
  - Deep Learning - Goodfellow.
  - Pattern Recognition and Machine Learning - Bishop.
  - The Elements of Statistical Learning - Hastie.
- Colah's Blog: http://colah.github.io
- Kaggle Notebooks.
- TensorFlow documentation pages.
- Google Cloud Data Engineer certification materials.
- Multiple Wikipedia articles.
Author: dformoso
Source Code: https://github.com/dformoso/machine-learning-mindmap
License: Apache-2.0 license
The GATE exam, conducted every year, is designed to evaluate students' knowledge of areas like engineering and science. Around 9 lakh (900,000) students appear for the exam each year, but very few qualify because very few seats are available in M.Tech colleges.
Let us look in detail at how to go about preparing for this exam.
Physics Wallah is here to provide you with the revised syllabus for #class9 science. Science is an exciting subject, and students who want to take the Science stream in the 11th grade should have good instruction in it. Many of the topics students learn in the 9th grade will also be explained further in the upper classes.
https://www.pw.live/syllabus-cbse-class-9/cbse-for-class-9-science
Physics is a subject of experiments and research: the more we experiment, the better we learn it. Keeping in mind the importance of physics in our daily life, we have provided a complete list of physics articles for students of all classes.
https://www.pw.live/physics-articles
#physics #newton #experience #physicsformulas #research #science
The concept of measurement is the comparison of an object's physical characteristics to a reference. The Class 11 Physics Chapter 2 Notes provide a thorough understanding of all measurement types, units, dimensions, and measurement errors.
https://www.pw.live/physics-questions-mcq/units-and-measurements
#classes #physics #class11notes #ncertsolutions #science #measurement
The phenomenon of matter changing from one state to another and back to its original state, by altering the conditions of temperature and pressure, is called the interconversion of matter.
https://www.pw.live/chapter-matter-is-our-surrounding-class-9/interconversion-of-states-of-matter
#physics #newton #solid #science #class #scientist #study #education
Write the merits of Bohr's theory.
The experimental values of the radii and energies in the hydrogen atom are in good agreement with those calculated on the basis of Bohr's theory.
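For reference, the standard Bohr-model results for hydrogen being compared against experiment are (textbook values, not taken from the linked page):

\[
r_n = n^2 a_0, \qquad a_0 = \frac{4\pi\varepsilon_0 \hbar^2}{m_e e^2} \approx 0.529\ \text{Å},
\qquad
E_n = -\frac{13.6\ \text{eV}}{n^2}
\]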
Continue Reading: https://www.pw.live/question-answer/write-the-merits-of-bohrs-theory-32886
“Are you looking for a Data Science Course Training Institute in Bangalore? We are a leading training institute that provides international credentials for training in Data Science. Our students get exposed to real-life experiences through live projects and plenty of assignments. With world-class faculty in the field of Data Science, learn to develop an understanding of Predictive and Descriptive Analytics and understand Data Structure and Data Manipulation. Join us if you are looking for a good training institute for Data Science in Bangalore to head-start your career in this broad field. We are located in HSR Layout, Bangalore. Feel free to get in touch with us for more information: 1800 212 654321”
Click here for more details on Data Science Training in Bangalore
#data science certification in bangalore #data #science #training
The reticulate package makes it easy to use the best of both, together!

R and Python have many similarities and many differences. Most of the underlying concepts of data structures are very similar between the two languages, and many data science packages now exist in both. But R is set up in a way that I would describe as "data first, application second," whereas Python feels more application-development driven from the outset. JavaScript programmers, for example, would slot into Python a little quicker than they would slot into R, purely from a syntax and environment management point of view.
More and more, I have been working in both R and Python, and I have come across situations where I'd like to use them together. This can happen for numerous reasons, but the most common one is that you are building something in R and you need functionality that you or someone else has previously written in Python. Sure, you could rewrite it in R, but that's not very DRY, is it?
The reticulate package in R allows you to execute Python code inside an R session. It's actually been around for a few years and has been improving steadily, so I wanted to type up a brief tutorial on how it works (a minimal example follows below). If you are an R native, getting reticulate up and running requires you to understand a little about how Python works, including how it typically does environment management, so this tutorial may help you get it set up much quicker than if you tried to work it out yourself.
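Here is a minimal sketch of what that looks like, using reticulate's documented import() and py_run_string() functions (the NumPy example is my own, not from the tutorial):

library(reticulate)

# Import a Python module and call it with R objects.
np <- import("numpy")
np$mean(c(1, 4, 9))  # Python's numpy.mean applied to an R vector

# Run arbitrary Python code, then pull the results back into R.
py_run_string("squares = [i ** 2 for i in range(5)]")
py$squares  # 0 1 4 9 16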
#data #programming #data-science #science #python #why choose between r and python?
In mid-2020, OpenAI presented the all-powerful language system GPT-3. It revolutionized the world and made headlines in major media outlets. This incredible technology can create fiction, poetry, music, code, and many other amazing things (I wrote a complete overview of GPT-3 for Towards Data Science if you want to check it out).
It was expected that other big tech companies wouldn't fall behind. Indeed, a few days ago at the annual Google I/O conference, Google executives presented the firm's latest research and technologies. One of them stole the show: LaMDA, a conversational AI capable of having human-like conversations.
In this article, I’m going to review the little we know today about this tech and how it works.
#technology #chatbots #science #artificial-intelligence #google’s lamda: the next generation of chatbots #next generation
So if you’ve been following my content, you know I’ve been writing a lot about artificial intelligence. I’ve shown some of the positive and negative developments in this area, and how we should harness this immensely powerful technology for the commonwealth of man; and not exploit its use for evil, the way we historically have with nuclear weaponry.
Of course, I’m not the only one saying it. There are many thinkers and innovators who have been advocating for this. In line with that, today’s piece is all about the ethical principles that pundits feel should be programmed into AI, and developed with a clear view in mind moving forward.
I’ve loosely been basing these artificial intelligence articles around an incredible book entitled 2084 written by Professor John Lennox. Therein, he has submitted a number of generally applicable principles surrounding the development and production of such technology.
He has drawn these motions from the so-called Asilomar AI Principles which were drafted at a conference back in 2017 in – you guessed it – Asilomar, California. Apparently, this has been endorsed by over one-thousand AI researchers. Other supporters of it include the late great Stephen Hawking, Jaan Tallinn, and last but predictably not least, Elon Musk.
Before we review it, though, why is such a declaration necessary? The answers to that question are subjective, imperfect, variable, and relative, but that in no way diminishes their force. One has only to take a truncated historical view to notice a salient human frailty: mankind is foolish and hasty, especially when we deliberate upon things in groups. Why else does the word deliberate imply persnicketiness, tediousness, meticulosity? Because deliberation is patient and omniscient, and that's not really in our nature.
The rate at which we typically address the ethical concerns of technological progress will be our undoing. Brinksmanship and a lack of due care have brought us to the lip of extinction repeatedly. We can't afford to repeat such folly!
Declarations of this magnitude are necessarily lofty and out of reach; that's why they're necessary. Without a vision and an iron-clad commitment, we are destined to fall far shorter than we would have had we espoused such ideals in the first place.
The principles Professor Lennox includes in his book are a collection of Asilomar's most salient features. Here they are.
#artificial-intelligence #tech #technology #science