Rusty Shanahan

Autonomous Boats, Autonomous Tech world

Today, we’ll talk about something that is barely covered in the Autonomous Tech world…

It’s also a topic I’m not very familiar with, so I’ll only be able to give a high-level view. Technology has enabled us to build self-driving cars, but the most impactful changes may actually come from flying cars and sea navigation.


This article is based on an email I wrote to my mailing list — the original email is called “Startup Idea — Autonomous Boats”; it’s a format where I explore the steps needed to build a startup around a specific topic.


Today, let’s take a look at Autonomous Boats.

How to start?

If we’re to build a startup, we must solve a problem that is Difficult, Urgent, and Recognized. After searching on Google, I found that the biggest problem around boats is collisions.

Does that surprise you?

Over 90% of the world’s trade is carried by sea; the whole world relies on the oceans, and the United States is among the countries most concerned. The ocean accounts for more than $352 billion of U.S. GDP and 3.1 million jobs, according to the National Oceanic and Atmospheric Administration (NOAA).

The risk of collision is highest in ports and other crowded waters.

Here is what happened between 2011 and 2016, in Europe only.


I took this figure from a report by the European Maritime Safety Agency (EMSA); take a minute to read it.

It’s full of fascinating facts.

Now, let’s see more precisely what we’re talking about.


Today’s article is about solving this particular problem.

This problem costs millions (in repairs and lost cargo), keeps happening, and grows with the number of boats at sea. It’s also a problem we all acknowledged watching the Titanic sink.

It’s time we solve it.



Autonomous Boats

Avoiding an accident when you’re driving a car is easy: you just turn the wheel a bit. Avoiding an accident with a cargo ship is much harder. Although I’ve never been in one, my father has his boat license, and I know how hard it is to maneuver in an emergency.

Definition

So, how do we make boats autonomous? To understand this, we must first define what “autonomous” means.

In self-driving vehicles, autonomy is measured by concrete facts.

In my self-driving car course, I describe the 5 levels of autonomy and how to target your market by selecting one of these levels.


*In a self-driving car, autonomy is easily measured using the driver’s involvement and the range of use cases covered.* In autonomous boats, we measure autonomy using only three levels:

  • No Autonomy — The current situation.
  • Half Autonomy — Autonomous, with a crew on board.
  • Full Autonomy — Autonomous, with no crew on board.

I won’t talk about remote-controlled boats; I wouldn’t know what to do with them.

**Autonomy is either on or off; the only criterion we’re using is the presence or absence of a crew.** In most of my emails and articles, I tell you about the 4 pillars that make vehicles autonomous. They really apply everywhere, even to boats.

Let’s review how these 4 pillars work in autonomous boats.

#deep-learning #autonomous-cars #machine-learning #self-driving-cars #deep learning


nova aleese

Airbnb for Boats App | Boat Rental Script | Yacht Rental Script

Entrepreneurs who are looking for a potential breakthrough can invest wisely in yacht rental app development. At Appdupe, we offer custom-built clone app solutions that are highly reliable. Our Airbnb for boats clone comes with cutting-edge features like two-factor authentication, in-app chat/call, etc. Besides, our solutions are highly scalable, enhancing your platform’s sustainability.

#boat rental app development #yacht rental app development #airbnb clone for boats #boat rental app #yacht rental app

Rusty Shanahan

Big Tech has thrived during the global pandemic

As the coronavirus pandemic continues to sweep across the United States, businesses in numerous sectors have been devastated by the accompanying economic fallout.

However, the country’s biggest tech companies have proven to be more resilient.

With much of the world stuck at home under stay-at-home orders, people have been more reliant on technology than ever before.

With many retail stores closed and consumers opting not to go to grocery stores, Amazon has had to deal with a significant increase in demand and has hired over 100,000 warehouse workers to help do so.

Cloud computing services offered by Microsoft, Google and Amazon have also seen an uptick in demand with millions across the country working from home.

Facebook’s shares rose 10% in April, recovering from a slump caused by a decline in ad revenue. Netflix saw substantial growth, adding 16 million new subscribers in the first three months of the year, which doubled the company’s predicted quarterly growth for 2020.

Big Tech has also used their resources to help solve challenges caused by the pandemic.

Microsoft donated 240,000 surgical masks and 3,000 bottles of hand sanitizer to Washington state, in addition to other supplies. The company also donated 114,000 packed lunches which were distributed to schools, low income housing centers and food banks.

With contact tracing a crucial part of health authorities’ efforts to limit the spread of the disease, Apple and Google have partnered to build a COVID-19 contact tracing system for iOS and Android mobile phones that lets users know when they have come into contact with someone who has tested positive for the virus.

#covid19 #tech #business #world #technology

Sasha Lee

Tech Ml Dataset: A Clojure Library for Data Processing and ML

tech.ml.dataset

tech.ml.dataset is a Clojure library for data processing and machine learning. Datasets are currently in-memory columnwise databases and we support parsing from file or input-stream. We support these formats: raw/gzipped csv/tsv, xls, xlsx, json, and sequences of maps as input sources. SQL and Clojurescript bindings are provided as separate libraries.

Data size in memory is minimized (primitive arrays), datetime types are often converted to an integer representation and strings are loaded into string tables. These features together dramatically decrease the working set size in memory. Because data is stored in columnar fashion columnwise operations on the dataset are very fast.

Conversion back into sequences of maps is very efficient and we have support for writing the dataset back out to csv, tsv, and gzipped varieties of those.
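
For example, writing a dataset back out is a one-liner. A minimal sketch, with the caveat that, as far as I know, the output format (csv/tsv, optionally gzipped) is inferred from the file extension:

(require '[tech.v3.dataset :as ds])

;; Load a small dataset and write it back out as gzipped TSV.
;; ds/write! lives in tech.v3.dataset; the ".tsv.gz" extension selects the format.
(def stocks (ds/->dataset "https://github.com/techascent/tech.ml.dataset/raw/master/test/data/stocks.csv"))
(ds/write! stocks "stocks-copy.tsv.gz")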

We have upgraded support for Apache Arrow. We have full support, including mmap, for JDK-8 through JDK-17, although if you are on an M1 Mac you will need to use JDK-17. We also support per-column compression (LZ4, ZSTD) across all supported platforms. The official Arrow SDK does not support mmap or JDK-17 and has no user-accessible way to save a compressed streaming-format file.

Large aggregations of potentially out-of-memory datasets are represented by a sequence of datasets. This is consistent with the design of the parquet and arrow data storage systems and aggregation operations involving large-scale datasets are efficiently implemented in the tech.v3.dataset.reductions namespace. We have started to integrate algorithms from the Apache Data Sketches system in the apache-data-sketch namespace. Summations/means in this area are implemented using the Kahan compensated summation algorithm.
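
Here is a rough sketch of what such an aggregation can look like, reusing the stocks dataset from the sketch above. The group-by-column-agg entry point and the row-count/sum/mean aggregator names are how I recall the tech.v3.dataset.reductions API, so treat them as assumptions and check the namespace docs:

(require '[tech.v3.dataset.reductions :as ds-reduce])

;; Aggregate over a sequence of datasets (here the same small dataset twice,
;; standing in for many out-of-memory chunks), grouping by the "symbol" column.
;; Assumed API: (group-by-column-agg colname aggregator-map dataset-seq).
(ds-reduce/group-by-column-agg
 "symbol"
 {:n-rows    (ds-reduce/row-count)
  :total     (ds-reduce/sum "price")
  :avg-price (ds-reduce/mean "price")}   ;; means use Kahan compensated summation
 [stocks stocks])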

Mini Walkthrough

user> (require '[tech.v3.dataset :as ds])
nil
;; We support many file formats
user> (def csv-data (ds/->dataset "https://github.com/techascent/tech.ml.dataset/raw/master/test/data/stocks.csv"))
#'user/csv-data
user> (ds/head csv-data)
test/data/stocks.csv [5 3]:

| symbol |       date | price |
|--------|------------|-------|
|   MSFT | 2000-01-01 | 39.81 |
|   MSFT | 2000-02-01 | 36.35 |
|   MSFT | 2000-03-01 | 43.22 |
|   MSFT | 2000-04-01 | 28.37 |
|   MSFT | 2000-05-01 | 25.45 |

;; tech.v3.libs.poi registers xls, tech.v3.libs.fastexcel registers xlsx.  If you want
;; to use poi for everything use workbook->datasets in the tech.v3.libs.poi namespace.
user> (require '[tech.v3.libs.poi])
nil
user> (def xls-data (ds/->dataset "https://github.com/techascent/tech.ml.dataset/raw/master/test/data/file_example_XLS_1000.xls"))
#'user/xls-data
user> (ds/head xls-data)
https://github.com/techascent/tech.v3.dataset/raw/master/test/data/file_example_XLS_1000.xls [5 8]:

| column-0 | First Name | Last Name | Gender |       Country |  Age |       Date |     Id |
|----------|------------|-----------|--------|---------------|------|------------|--------|
|      1.0 |      Dulce |     Abril | Female | United States | 32.0 | 15/10/2017 | 1562.0 |
|      2.0 |       Mara | Hashimoto | Female | Great Britain | 25.0 | 16/08/2016 | 1582.0 |
|      3.0 |     Philip |      Gent |   Male |        France | 36.0 | 21/05/2015 | 2587.0 |
|      4.0 |   Kathleen |    Hanner | Female | United States | 25.0 | 15/10/2017 | 3549.0 |
|      5.0 |    Nereida |   Magwood | Female | United States | 58.0 | 16/08/2016 | 2468.0 |

;;And you have fine grained control over parsing

user> (ds/head (ds/->dataset "https://github.com/techascent/tech.ml.dataset/raw/master/test/data/file_example_XLS_1000.xls"
                             {:parser-fn {"Date" [:local-date "dd/MM/yyyy"]}}))
https://github.com/techascent/tech.v3.dataset/raw/master/test/data/file_example_XLS_1000.xls [5 8]:

| column-0 | First Name | Last Name | Gender |       Country |  Age |       Date |     Id |
|----------|------------|-----------|--------|---------------|------|------------|--------|
|      1.0 |      Dulce |     Abril | Female | United States | 32.0 | 2017-10-15 | 1562.0 |
|      2.0 |       Mara | Hashimoto | Female | Great Britain | 25.0 | 2016-08-16 | 1582.0 |
|      3.0 |     Philip |      Gent |   Male |        France | 36.0 | 2015-05-21 | 2587.0 |
|      4.0 |   Kathleen |    Hanner | Female | United States | 25.0 | 2017-10-15 | 3549.0 |
|      5.0 |    Nereida |   Magwood | Female | United States | 58.0 | 2016-08-16 | 2468.0 |
user>


;;Loading from the web is no problem
user>
user> (def airports (ds/->dataset "https://raw.githubusercontent.com/jpatokal/openflights/master/data/airports.dat"
                                  {:header-row? false :file-type :csv}))
#'user/airports
user> (ds/head airports)
https://raw.githubusercontent.com/jpatokal/openflights/master/data/airports.dat [5 14]:

| column-0 |                                    column-1 |     column-2 |         column-3 | column-4 | column-5 |    column-6 |     column-7 | column-8 | column-9 | column-10 |            column-11 | column-12 |   column-13 |
|----------|---------------------------------------------|--------------|------------------|----------|----------|-------------|--------------|----------|----------|-----------|----------------------|-----------|-------------|
|        1 |                              Goroka Airport |       Goroka | Papua New Guinea |      GKA |     AYGA | -6.08168983 | 145.39199829 |     5282 |     10.0 |         U | Pacific/Port_Moresby |   airport | OurAirports |
|        2 |                              Madang Airport |       Madang | Papua New Guinea |      MAG |     AYMD | -5.20707989 | 145.78900147 |       20 |     10.0 |         U | Pacific/Port_Moresby |   airport | OurAirports |
|        3 |                Mount Hagen Kagamuga Airport |  Mount Hagen | Papua New Guinea |      HGU |     AYMH | -5.82678986 | 144.29600525 |     5388 |     10.0 |         U | Pacific/Port_Moresby |   airport | OurAirports |
|        4 |                              Nadzab Airport |       Nadzab | Papua New Guinea |      LAE |     AYNZ | -6.56980300 | 146.72597700 |      239 |     10.0 |         U | Pacific/Port_Moresby |   airport | OurAirports |
|        5 | Port Moresby Jacksons International Airport | Port Moresby | Papua New Guinea |      POM |     AYPY | -9.44338036 | 147.22000122 |      146 |     10.0 |         U | Pacific/Port_Moresby |   airport | OurAirports |

;;At any point you can get a sequence of maps back.  We implement a special version
;;of Clojure's APersistentMap that is much more efficient than even records and shares
;;the backing store with the dataset.

user> (take 2 (ds/mapseq-reader csv-data))
({"date" #object[java.time.LocalDate 0x4a998af0 "2000-01-01"],
  "symbol" "MSFT",
  "price" 39.81}
 {"date" #object[java.time.LocalDate 0x6d8c0bcd "2000-02-01"],
  "symbol" "MSFT",
  "price" 36.35})

;;Datasets are comprised of named columns, and provide a Clojure hashmap-compatible
;;collection.  Datasets allow reading and updating column data associated with a column name,
;;and provide a sequential view of [column-name column] entries.

;;You can look up columns via `get`, keyword lookup, and invoking the dataset as a function on
;;a key (a column name). `keys` and `vals` retrieve respective sequences of column names and columns.
;;The functions `assoc` and `dissoc` work to define new associations to conveniently
;;add, update, or remove columns, with add/update semantics defined by`tech.v3.dataset/add-or-update-column`.

;;Column data is stored in primitive arrays (even most datetimes!) and strings are stored
;;in string tables.  You can load really large datasets with this thing!

;;Columns themselves are sequences of their entries.
user> (csv-data "symbol")
#tech.v3.dataset.column<string>[560]
symbol
[MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, ...]
user> (xls-data "Gender")
#tech.v3.dataset.column<string>[1000]
Gender
[Female, Female, Male, Female, Female, Male, Female, Female, Female, Female, Female, Male, Female, Male, Female, Female, Female, Female, Female, Female, ...]
user> (take 5 (xls-data "Gender"))
("Female" "Female" "Male" "Female" "Female")


;;Datasets and columns implement the clojure metadata interfaces (`meta`, `with-meta`, `vary-meta`)

;;You can access a sequence of columns of a dataset with `ds/columns`, or `vals` like a map,
;;and access the metadata with `meta`:

user> (->> csv-data
           vals  ;synonymous with ds/columns
           (map (fn [column]
                  (meta column))))
({:categorical? true, :name "symbol", :size 560, :datatype :string}
 {:name "date", :size 560, :datatype :packed-local-date}
 {:name "price", :size 560, :datatype :float32})

;;We can similarly destructure datasets like normal clojure
;;maps:

user> (for [[k column] csv-data]
        [k (meta column)])
(["symbol" {:categorical? true, :name "symbol", :size 560, :datatype :string}]
 ["date" {:name "date", :size 560, :datatype :packed-local-date}]
 ["price" {:name "price", :size 560, :datatype :float64}])

user> (let [{:strs [symbol date]} csv-data]
        [symbol (meta date)])
[#tech.v3.dataset.column<string>[560]
symbol
[MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, MSFT, ...]
 {:name "date", :size 560, :datatype :packed-local-date}]

;;We can get a brief description of the dataset:

user> (ds/brief csv-data)
({:min #object[java.time.LocalDate 0x5b2ea1d5 "2000-01-01"],
  :n-missing 0,
  :col-name "date",
  :mean #object[java.time.LocalDate 0x729b7395 "2005-05-12"],
  :datatype :packed-local-date,
  :quartile-3 #object[java.time.LocalDate 0x6c75fa43 "2007-11-23"],
  :n-valid 560,
  :quartile-1 #object[java.time.LocalDate 0x13d9aabe "2002-11-08"],
  :max #object[java.time.LocalDate 0x493bf7ef "2010-03-01"]}
 {:min 5.97,
  :n-missing 0,
  :col-name "price",
  :mean 100.7342857142857,
  :datatype :float64,
  :skew 2.4130946430619233,
  :standard-deviation 132.55477114107083,
  :quartile-3 100.88,
  :n-valid 560,
  :quartile-1 24.169999999999998,
  :max 707.0}
 {:mode "MSFT",
  :values ["MSFT" "AMZN" "IBM" "AAPL" "GOOG"],
  :n-values 5,
  :n-valid 560,
  :col-name "symbol",
  :n-missing 0,
  :datatype :string,
  :histogram (["MSFT" 123] ["AMZN" 123] ["IBM" 123] ["AAPL" 123] ["GOOG" 68])})

;;Another view of that brief:

user> (ds/descriptive-stats csv-data)
https://github.com/techascent/tech.v3.dataset/raw/master/test/data/stocks.csv: descriptive-stats [3 10]:

| :col-name |          :datatype | :n-valid | :n-missing |       :min |      :mean | :mode |       :max | :standard-deviation |      :skew |
|-----------|--------------------|----------|------------|------------|------------|-------|------------|---------------------|------------|
|      date | :packed-local-date |      560 |          0 | 2000-01-01 | 2005-05-12 |       | 2010-03-01 |                     |            |
|     price |           :float64 |      560 |          0 |      5.970 |      100.7 |       |      707.0 |        132.55477114 | 2.41309464 |
|    symbol |            :string |      560 |          0 |            |            |  MSFT |            |                     |            |


;;There are analogues of the clojure.core functions that apply to dataset:
;;filter, group-by, sort-by.  These are all implemented efficiently.

;;You can add/remove/update columns, or use the map idioms of `assoc` and `dissoc`

user> (-> csv-data
          (assoc "always-ten" 10) ;scalar values are expanded as needed
          (assoc "random"   (repeatedly (ds/row-count csv-data) #(rand-int 100)))
          ds/head)
https://github.com/techascent/tech.v3.dataset/raw/master/test/data/stocks.csv [5 5]:

| symbol |       date | price | always-ten | random |
|--------|------------|-------|------------|--------|
|   MSFT | 2000-01-01 | 39.81 |         10 |     47 |
|   MSFT | 2000-02-01 | 36.35 |         10 |     35 |
|   MSFT | 2000-03-01 | 43.22 |         10 |     54 |
|   MSFT | 2000-04-01 | 28.37 |         10 |      6 |
|   MSFT | 2000-05-01 | 25.45 |         10 |     52 |

user> (-> csv-data
          (dissoc "price")
          ds/head)
https://github.com/techascent/tech.v3.dataset/raw/master/test/data/stocks.csv [5 2]:

| symbol |       date |
|--------|------------|
|   MSFT | 2000-01-01 |
|   MSFT | 2000-02-01 |
|   MSFT | 2000-03-01 |
|   MSFT | 2000-04-01 |
|   MSFT | 2000-05-01 |


;;since `conj` works as with clojure maps and sequences of map-entries or pairs,
;;you can use idioms like `reduce conj` or `into` to construct new datasets on the
;;fly with familiar clojure idioms:

user> (let [new-cols [["always-ten" 10] ["new-price" (map inc (csv-data "price"))]]
            new-data (into (dissoc csv-data "price") new-cols)]
            (ds/head new-data))
https://github.com/techascent/tech.v3.dataset/raw/master/test/data/stocks.csv [5 4]:

| symbol |       date | always-ten | new-price |
|--------|------------|------------|-----------|
|   MSFT | 2000-01-01 |         10 |     40.81 |
|   MSFT | 2000-02-01 |         10 |     37.35 |
|   MSFT | 2000-03-01 |         10 |     44.22 |
|   MSFT | 2000-04-01 |         10 |     29.37 |
|   MSFT | 2000-05-01 |         10 |     26.45 |

;;You can write out the result back to csv, tsv, and gzipped variations of those.

;;Joins (left, right, inner) are all implemented.
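
;;A rough sketch of a left join, assuming the tech.v3.dataset.join namespace and a
;;(left-join column-name left-ds right-ds) signature -- the lookup table below is
;;made up purely for illustration:

(require '[tech.v3.dataset.join :as ds-join])

(def company-names (ds/->dataset [{"symbol" "MSFT" "company" "Microsoft"}
                                  {"symbol" "AAPL" "company" "Apple"}]))

;;Rows whose symbol has no match keep missing values in the joined columns.
(ds/head (ds-join/left-join "symbol" csv-data company-names))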

;;Columnwise arithmetic manipulations (+, -, and many more) are provided via the
;;tech.v3.datatype.functional namespace.
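
;;For example, a minimal sketch (assuming dfn/+ broadcasts the scalar over the column):

(require '[tech.v3.datatype.functional :as dfn])

;;Add a derived column: every price shifted by 10.
(ds/head (assoc csv-data "price+10" (dfn/+ (csv-data "price") 10)))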

;;Datetime columns can be operated on - plus/minus temporal amounts, extracting
;;years, days, and many more - uniformly via the tech.v3.datatype.datetime namespace.
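
;;A hedged sketch -- it assumes tech.v3.datatype.datetime and a
;;(plus-temporal-amount data amount units) argument order; check the namespace
;;docs if your version differs:

(require '[tech.v3.datatype.datetime :as dtype-dt])

;;Shift every date forward by 30 days and attach the result as a new column.
(ds/head (assoc csv-data "date+30d"
                (dtype-dt/plus-temporal-amount (csv-data "date") 30 :days)))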

;;There is much more.  Please check out the walkthrough and try it out!

Arrow Support

JDK-17, compression, and memory mapping are all supported; see the Arrow API documentation for details.
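
A rough sketch of a round trip using the walkthrough's csv-data: the dataset->stream! / stream->dataset entry points are in tech.v3.libs.arrow, while the compression and mmap option keys shown here are my assumptions, so verify them against the Arrow documentation:

(require '[tech.v3.libs.arrow :as arrow])

;; Write the Arrow streaming format with per-column compression (option key assumed),
;; then read it back, memory-mapping the file (option key assumed).
(arrow/dataset->stream! csv-data "stocks.arrow" {:compression :zstd})
(arrow/stream->dataset "stocks.arrow" {:open-type :mmap})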

Parquet Support

Parquet now has first class support. That means we should be able to load most Parquet files and support their full range of datatypes.
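
A minimal sketch, assuming the ds->parquet / parquet->ds entry points in tech.v3.libs.parquet and that the Parquet dependencies have been added to the classpath separately:

(require '[tech.v3.libs.parquet :as parquet])

;; Round-trip the walkthrough's csv-data through a Parquet file.
(parquet/ds->parquet csv-data "stocks.parquet")
(ds/head (parquet/parquet->ds "stocks.parquet"))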

More Documentation

Questions, Community

Further Reading


Author: techascent
Source Code: https://github.com/techascent/tech.ml.dataset
License: EPL-1.0

#machine-learning 

Latest Technology Solution Development - WebClues Infotech

Latest IT Tech Solution Development Company

Technology in the IT sector is growing rapidly, with everything in the world moving online to make users’ lives easier. This development has allowed critical industries to move online as well, with technologies like blockchain, artificial intelligence, cloud computing, big data services, etc.

Want to use the latest technologies in IT for your business?

WebClues Infotech, with its policy of training employees in the latest technologies like blockchain, wearables apps, chatbot apps, AI, and many more, is the leader in the development of those technologies. With a highly skilled team of 120+ people, there can be no better option for your development requirements in the latest technologies.

Want to know more about the technologies we provide solutions in?

Visit: https://www.webcluesinfotech.com/latest-technology-development/

Share your requirements https://www.webcluesinfotech.com/contact-us/

View Portfolio https://www.webcluesinfotech.com/portfolio/

#latest it tech solution development company #it tech solution development company #it tech solution #technology solution development #it path solutions #tech solution india