Tag Archives: technology

Neural lace – Innerscape comes one step closer!

Apologies, but I’m high-fiving myself like an idiot because of an article I just read on futurism.com:

https://futurism.com/within-the-next-decade-you-could-be-living-in-a-post-smartphone-world/

The whole article is interesting as it attempts to predict the near-, medium- and long-term future of communications technology, but it was this paragraph that made me so happy:

This week, we got our first look at Neuralink, a new company cofounded by Musk with a goal of building computers into our brains by way of “neural lace,” a very early-stage technology that lays on your brain and bridges it to a computer. It’s the next step beyond even that blending of the digital and physical worlds, as human and machine become one.

The only thing I’m sceptical about is the time-frame. Tech that you carry and tech that you ‘wear’ is one thing, but tech that invades your brain is something else entirely. I’m sure there will be some maverick individuals who will ignore the risk and give the neural lace a try, but most of us will not jump in quite so quickly. Think desktop computers and the general public. The vast majority of people who use smartphones now either never learned to use computers properly or never felt comfortable with them – i.e. the gain did not negate the pain.

I think the concept of an in-built, brain-machine interface will be around for quite a while before some tech comes along that makes the interface safe, painless and, most of all, easy.

To me, easy is the operative word because, as a species, we always look for the line of least resistance. I just hope I’m still around when it happens as the next few decades are going to be very interesting indeed. 🙂

cheers

Meeks


-blush- ‘teledildonics’…

You should consider this a tech post with an R rating. You’ve been warned.

[Image: a pair of haptic gloves in use]

http://fab.cba.mit.edu/classes/863.11/people/daniel.rosenberg/pf.html

Right. This really is a case of sci-fi made obsolete by reality. The image you’re looking at shows a pair of ‘haptic’ gloves at work. They allow the wearer to manipulate elements of a digital environment directly – i.e. no need for a mouse or keyboard or game controller. Essentially, sensors in the glove translate real-world movement and pressure into digital movement and pressure.
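
Out of curiosity, here’s a rough sketch of what that translation might look like in code. It’s purely illustrative – the sensor names, raw value ranges and calibration numbers are all invented, and real gloves come with their own hardware and SDKs:

```python
# Minimal sketch of how a haptic glove might map raw sensor readings to a
# digital hand pose. Sensor names and calibration ranges are invented for
# illustration -- real gloves use their own SDKs and hardware.

def normalise(raw, lo, hi):
    """Scale a raw sensor reading into the 0.0-1.0 range, clamped."""
    return max(0.0, min(1.0, (raw - lo) / (hi - lo)))

def read_glove(raw_flex, raw_pressure):
    """Translate raw flex/pressure readings into digital joint values."""
    pose = {}
    for finger, raw in raw_flex.items():
        # Hypothetical calibration: 200 = finger straight, 900 = fully bent.
        pose[finger] = normalise(raw, 200, 900)
    # Hypothetical fingertip pressure pad: 0 = no touch, 1023 = hard press.
    grip = normalise(raw_pressure, 0, 1023)
    return pose, grip

# Example: a half-curled index finger pressing lightly on a virtual object.
pose, grip = read_glove({"index": 550, "thumb": 300}, raw_pressure=250)
print(pose, grip)
```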

I knew about these haptic gloves because I’m a gamer, and I like to think about new technologies that make gaming more fun. Not surprisingly then, my sci-fi story, Innerscape, contains many existing technologies, extrapolated into their possible future equivalents. One example is the evolution of the haptic glove into the full body gaming suit. But even modern day technology can be used in all sorts of ways. Most people see web cams and Skype as a useful tool for teleconferencing, or to allow friends to see each other and talk in real time. To the porn industry, however, the same technology is a great way to deliver a lucrative product.

Online porn is not something I know a great deal about, but it’s not something I can ignore, either. I do a lot of research online, and anything of a sexual nature can bring up unexpected results – e.g. when I researched hermaphrodites for Vokhtah. I quickly learned to phrase my queries with great care, and that awareness informed my prediction that the porn industry would spearhead the development of immersive reality in Innerscape. Yes, I know, pun intended…

Despite this rather pragmatic view of the world, however, I had no idea that a real-world company was already selling a primitive version of the immersive porn of my imagined future. What’s even worse, I had no idea that this real-world company bears the same name [more or less] as a company I dreamed up for Innerscape.

[SPOILER: Leon lets the Woman in Red into his apartment when he sees that she’s delivering his brand new, top of the range, Real Touch gaming suit.]

The real-world company already making haptic devices for the porn industry is called Realtouch Interactive.

I swear I am not making this up. I didn’t know about Realtouch Interactive until just now when I read about the latest developments in ‘haptic gloves’ on New Atlas. Imagine my surprise when the same article included a link to…’teledildonics’.

The link to that article is here:

http://newatlas.com/flex-n-feel-glove-long-distance-relationships/47900/?utm_source=Gizmag+Subscribers&utm_campaign=f1f477b260-UA-2235360-4&utm_medium=email&utm_term=0_65b67362bd-f1f477b260-92416841

You can find the link to ‘teledildonics’ yourselves. If you so wish. -cough-

Be warned, though: in the article, a male writer test-drives the ‘device’, and although the descriptions are not super graphic, they don’t leave too much to the imagination. Included in the article is information about how the company created its own tech in order to sync sight, sound and data. Just as I predicted!

I suppose this is the point at which I should explain why data has to be synced along with sight and sound. The haptic ‘device’ is hooked up to the computer via USB at the user’s end. At the ‘cam girl’ end, a slightly different device allows the professional lady to control the sensations sent to the user’s device. Thus, audio, video and the transfer of this haptic data have to occur at the same time or the effect is ruined.
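
For the technically minded, here’s a toy sketch of one way that syncing could work: every packet carries a capture timestamp, and the player releases video, audio and haptic packets together only when their timestamps fall within a small tolerance of each other. The stream names and the 40 ms tolerance are my own illustrative assumptions, not the company’s actual design:

```python
# Toy model of keeping three streams in sync via capture timestamps.
# Packets from the same instant are released together as one group.
import heapq

TOLERANCE = 0.040  # max drift (seconds) tolerated between streams (assumed)

def synced_playback(packets):
    """packets: list of (timestamp, stream, payload) tuples, where stream
    is 'video', 'audio' or 'haptic'. Yields groups that play together."""
    heap = list(packets)
    heapq.heapify(heap)                  # order everything by capture time
    group, group_time = [], None
    while heap:
        ts, stream, payload = heapq.heappop(heap)
        if group_time is None or ts - group_time <= TOLERANCE:
            group.append((stream, payload))
            group_time = ts if group_time is None else group_time
        else:
            yield group                  # release the synced group
            group, group_time = [(stream, payload)], ts
    if group:
        yield group

demo = [(0.00, "video", "frame0"), (0.01, "audio", "chunk0"),
        (0.02, "haptic", "pulse0"), (0.10, "video", "frame1"),
        (0.11, "haptic", "pulse1")]
for g in synced_playback(demo):
    print(g)
```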

Long term, however, this very same technology will drive something else I wrote about in Innerscape – teleoperation. This is where the surgeon and the patient are separated by long distances, but the surgeon can still operate via a robotic surgical tool.

I don’t know about you, but I’m feeling kind of shell-shocked. None of this technology was meant to happen for decades, yet here it is in 2017. Clearly, the tech will be enhanced and improved enormously in the coming years, but I still feel rather ambivalent about the whole thing. Yes, it’s nice to predict the tech of the future, but it’s not so nice to get the timing so very wrong. Oh well…back to work.

cheers

Meeks


A smelly but good news tech post

Apologies if this puts anyone off, but I’m really excited by this innovative way of dealing with sewage. Not only does it make something useful out of a big, smelly problem, it does so in a ‘relatively’ small space. [Conventional sewage works take up acres and acres and acres of land that could be used for other things].

To see how this innovative approach actually works, please read the article on New Atlas:

http://newatlas.com/mimic-nature-sewage-oil/46260/?li_source=LI&li_medium=default-widget

As a sci-fi writer, I’m interested in all kinds of futuristic world-building, and one of my earliest ideas was for an ‘undercity’ built to replace much of Melbourne after sea-level rises drown the lower reaches. Obviously, the new undercity would have to be built on much higher ground to avoid being drowned as well, but it would have lots of big advantages – the temperature would remain more or less constant, bushfires would no longer be a danger, and the land above the city could be used for productive agriculture. [At the moment, all Australian cities spread outward, and our suburbs are built on land that would be better used for the growing of food].

One major problem with this undercity, however, was the issue of waste. I imagined food waste being ‘eaten’ by the SL’ick [synthetic life chickens that look like huge worms made of chicken breast meat], but I simply could not come up with an innovative way of dealing with the body wastes we humans produce. Until now. One small step for my world of the future, one large step for waste management. 🙂

cheers

Meeks


#3D printing on a LARGE scale

I wouldn’t be much of a sci-fi writer if I didn’t keep up with technology, so I’ve had a love affair with 3D printing since I first heard about it. But the technology is changing so fast that I’m constantly being surprised. This is my surprise for the day:

Those are actual, standard-sized structures, printed by huge machines. But, as if that were not surprising enough, the material used to build them is made from a combination of industrial waste and cement, so it’s recycling on top of everything else.

Colour me gobsmacked.

The video below is an animation of how the process is supposed to work:

The video goes for almost five minutes, but the music is pretty and I couldn’t stop watching. I work with words, ideas and computers, so I’m fascinated by this technology, but I can’t help wondering about those whose jobs will be made obsolete by 3D printing. What of them?

If I had a crystal ball, I’d say that some of the manual workers of the world will become artisan craftspeople – I think there will always be a demand for crafts – but only a small percentage of builders and brickie’s labourers will be able to make that transition. What of the rest?

I think our whole way of thinking about work is going to have to change. Any thoughts?

cheers

Meeks


Augmented Reality – it’s just around the corner

Vuzix knows that people don’t want to be embarrassed when they put something on their face. So the company is working hard to ship a pair of augmented reality smartglasses this year that will be thin enough to wear comfortably. The Rochester, N.Y.-based company unveiled its latest models, the Blade 3000 Smart Sunglasses and the…

via Vuzix aims to ship thin augmented reality smartglasses in 2017 — VentureBeat

In Innerscape, Episode 5, I write about the NCTU agent following a digitally projected ‘map’ to his destination. In the trailer above, the guy wearing the AR smart glasses does the same thing. The details are obviously different, but the concept is the same. I am so chuffed. 😀

cheers

Meeks


#Solar powered micro-grid + #Tesla batteries = the future?

Just found this amazing article on New Atlas. It concerns a small island being powered almost exclusively by a micro-grid made up of solar panels and Tesla batteries. The batteries can be fully charged in 7 hours and can keep the grid running for 3 days without any sun at all:

Why do I find this so exciting? Distributed systems, that’s why.

“And what’s that?” you ask, eyes glazing over as you speak.

In computing, which is where I first heard the term, a distributed system is:

a model in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal.

Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing.

[https://en.wikipedia.org/wiki/Distributed_computing#Introduction]

Okay, okay. Here are some nice, juicy examples instead:

  • the internet,
  • your mobile phone network
  • MMOs [massively multiplayer online games] like the one I play,
  • virtual reality communities, and even
  • the search for extraterrestrial intelligence [SETI].

There are heaps more examples I could name, but the point is that all these systems rely on the fact that the power of the group is greater than the power of its individual components. In fact, the world wide web could not exist at all if it had to be run from just one ginormous computer installation.

So distributed systems can be insanely powerful, but when it comes to powering our cities, we seem to be stuck on the old, top-down model in which one centralised system provides energy to every component in the system – i.e. to you and me and all our appliances.

Opponents of renewables always cite baseload as the main reason why renewables won’t work in highly developed countries. What they don’t tell you is that to provide baseload, they have to generate electricity all the time. That means burning fossil fuels all the time and creating pollution all the time.

Centralised power generation also does something else – it concentrates the means of producing this energy in one place, so if there is a malfunction, the whole grid goes down. But that’s not all. If all power is produced in one place, it’s all too easy to strike at that one place and destroy the ‘heart’ of the whole system. It can happen. If you read the whole article on New Atlas, you’ll learn that the supply of diesel to the island was once cut off for months. When the diesel ran out, so did the electricity. Now imagine an act of sabotage that destroys the power supply to a city of millions. It hasn’t happened yet, but I think it’s just a matter of time.

By contrast, a distributed system means you would have to destroy virtually every component to shut it down completely. A good example of this is our road system. In most areas, if one part of the road is closed for whatever reason, we can still get where we want to go by taking a detour. It may take us a little longer, but we get there in the end. Something very similar happens with the internet. Digital information is sent in ‘packets’ which attempt to find the quickest route from point A to point X, usually via point B. However, if point B goes down, the packets have multiple alternative routes to get to X. Why should power generation be any less resilient?
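
Here’s a toy illustration of that redundancy, using a tiny made-up network. Knock out node B and the packets simply find a longer way around. Real internet routing is vastly more sophisticated, of course; this just demonstrates the principle:

```python
# Route redundancy in miniature: find a path from A to X, then knock out
# node B and find a detour. The network layout is invented for illustration.
from collections import deque

NETWORK = {
    "A": ["B", "C"],
    "B": ["A", "X"],
    "C": ["A", "D"],
    "D": ["C", "X"],
    "X": ["B", "D"],
}

def shortest_path(graph, start, goal, down=()):
    """Breadth-first search that ignores any nodes listed in `down`."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in graph[path[-1]]:
            if nxt in seen or nxt in down:
                continue
            if nxt == goal:
                return path + [nxt]
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # no route left at all

print(shortest_path(NETWORK, "A", "X"))              # ['A', 'B', 'X']
print(shortest_path(NETWORK, "A", "X", down={"B"}))  # ['A', 'C', 'D', 'X']
```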

In the past, electricity could not be stored, so it had to be generated by big, expensive power plants. That volume of electricity still can’t be stored, but in the future, it may not have to be. I foresee a time when neighbourhoods will become micro-grids, with each house/building contributing to the power needs of the whole neighbourhood. Surplus power generation will be stored in some form of battery system [it doesn’t have to be Tesla batteries, but they obviously work well in distributed systems] to provide power 24 hours a day, 7 days a week. More importantly, the type of micro-grid used could be flexible. Communities living inland with almost constant sunshine would obviously use solar, but seaside communities might use wave power, others might use hydro or geothermal.
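
To make the idea concrete, here’s a back-of-the-envelope simulation of such a neighbourhood micro-grid over a single day. Every figure – the number of houses, panel output, household draw and battery size – is invented purely for illustration:

```python
# Back-of-the-envelope neighbourhood micro-grid: daytime solar surplus
# charges a shared battery, which then covers the night-time shortfall.
# All figures are assumptions, not real-world data.

HOUSES = 20
SOLAR_PER_HOUSE_KW = 5.0   # daytime generation per house (assumed)
LOAD_PER_HOUSE_KW = 2.0    # round-the-clock draw per house (assumed)
BATTERY_KWH = 300.0        # shared neighbourhood battery (assumed)

battery = 0.0
for hour in range(24):
    daylight = 8 <= hour < 18                    # crude ten-hour solar day
    generated = HOUSES * SOLAR_PER_HOUSE_KW if daylight else 0.0
    used = HOUSES * LOAD_PER_HOUSE_KW
    surplus = generated - used                   # kWh over this one hour
    battery = min(BATTERY_KWH, max(0.0, battery + surplus))
    print(f"{hour:02d}:00  battery = {battery:6.1f} kWh")
```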

But what of industry?

I may be a little optimistic here, but I think that distributed power generation could work for industry as well. Not only could manufacturing plants provide at least some of their own power, via both solar and wind, but they could ‘buy in’ unused power from the city. The city, meanwhile, would not generate power, but its utility companies could store excess power in massive flywheels or some other kind of large-scale storage device. And finally, if none of that is enough, companies could do what utility companies already do now – they could buy in power from other states.

In this possible future, power generation would be cheaper, cleaner and much, much safer. All that’s required is for the one-size-fits-all mindset to change.

Distributed is the way of the future, so start thinking about it today. 🙂

cheers

Meeks


Eye-tracking for VR [virtual reality]

I just found a really interesting article in my Reader. It’s about eye-tracking technology and its use in [some] games.

The current interface has a learning curve without, imho, much added value. That said, I have to admit I don’t play first-person shooters, or the kinds of games where speed and twitch response are key.

There is one area, however, where I can see this technology becoming absolutely vital – and that’s in VR [virtual reality]:

Eye-tracking is critical to a technology called foveated rendering. With it, the screen will fully render the area that your eye is looking at. But beyond your peripheral vision, it won’t render the details that your eye can’t see.

This technique can save an enormous amount of graphics processing power. (Nvidia estimates foveated rendering can reduce graphics processing by up to three times). That is useful in VR because it takes a lot of graphics processing power to render VR images for both of your eyes. VR should be rendered at 90 frames per second in each eye in order to avoid making the user dizzy or sick.

A brief explanation is in order for non-gamers. Currently, there are two ways of viewing a game:

  • from the first-person perspective
  • from the third-person perspective

In first-person perspective, you do not see your own body. Instead, the graphics attempt to present the view you would see if you were actually physically playing the game.

In third-person perspective, you ‘follow’ behind your body, essentially seeing your character’s back the whole time. This view has advantages, as it allows you to see much more in your ‘peripheral’ vision than you would if you were looking out through your character’s eyes.

In VR, however, the aim is not just to make you see what your character sees; the idea is to make you feel that you are your character. A vision system that mimics how your eyes work by tracking your actual eye movements would increase immersion by an order of magnitude. And, of course, the computer resources freed up by this more efficient way of rendering would allow the game to create more realistic graphics elsewhere.
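
To put some very rough numbers on the savings, here’s a back-of-the-envelope calculation. The screen size, fovea radius and peripheral cost ratio are my own illustrative assumptions, not Nvidia’s actual figures, but the result lands in the same ballpark as their ‘up to three times’ estimate:

```python
# Rough estimate of foveated rendering's savings: pixels near the gaze
# point are shaded at full cost, everything else at a fraction of it.
# All numbers below are assumptions for illustration only.
import math

WIDTH, HEIGHT = 2160, 1200   # one eye's render target in pixels (assumed)
FOVEA_RADIUS = 250           # pixels shaded at full quality (assumed)
PERIPHERY_COST = 0.25        # relative shading cost outside the fovea

total_px = WIDTH * HEIGHT
fovea_px = math.pi * FOVEA_RADIUS ** 2        # gaze region, full cost
periphery_px = total_px - fovea_px            # everything else, cheap
foveated_cost = fovea_px + periphery_px * PERIPHERY_COST

print(f"foveated rendering needs {foveated_cost / total_px:.0%} of the work")
print(f"i.e. roughly {total_px / foveated_cost:.1f}x less shading")
```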

You can read the full article here:

https://wordpress.com/read/feeds/26908997/posts/1307290866

I predict that voice recognition and eye tracking are going to become key technologies in the not too distant future, not just for games but for augmented* reality as well.

Have a great Sunday,

Meeks

*Augmented reality does not seek to recreate reality, like VR. It merely projects additional ‘objects’ on top of the reality that’s already there.


A phone I could get excited about

After years of rumors and false starts, both Samsung and LG are preparing to unveil portable devices with folding screens later this year, according to a report in the Korea Herald (via XDA). Samsung is likely to produce 100,000 of the smartphone-cum-tablets in the third quarter, the Herald claims, while LG may manufacture the same…

via Samsung and LG both reportedly launching foldable phones in second half of 2017 — VentureBeat

I currently have a Kindle Fire for ‘reading’ and an old Samsung Galaxy SII for ‘communicating’. I have checked the internet on the phone – once or twice – but the screen is much too small for comfortable reading. As a result, I use it almost exclusively for calls, EmergencyAus alerts, and as a camera.

If Samsung can give me the convenient size of a phone with the screen real estate of a tablet, I might just jump ship from the Kindle.


#Chatbots – and we need them because…?

Okay, all I know about chatbots is what I’ve been reading on Medium lately, and the frustrating experience of ringing my utility company and being forced to answer the STUPID questions of its chatbot.

You know how it goes. You ring and either have to wait forever for the call to be picked up, or the chatbot answers and asks for your account number when all you want is some general information. Grrrr….

So you dig out a utility bill and spit out the account number, knowing full well that if you get through to a real person they will ask you for the number again anyway.

Then the utility company bot asks you to explain the reason for your call. You grit your teeth and try to think of a one- or three-word description and e.n.u.n.c.i.a.t.e it as clearly as possible while growling in the back of your throat.

What happens next? The chatbot either mishears you, or simply doesn’t have a response for your particular query and asks if you want to speak to a customer service representative…

-face palm-

Do I want to speak to a real, live person? Oh god…

Anyway, if you look at this infographic from Medium, you will see a comparison between a chatbot ‘conversation’ and the same query via a simple Google search:

[Infographic: a chatbot ‘conversation’ vs. the same query as a Google search]

To me, there is no point in carrying on a long, inane Q&A ‘conversation’ with a chatbot when a word or two typed into Papa Google gets me all the information I need. But am I just being an elitist nerd?

I rather suspect I am. In fact, I rather suspect that most people who regularly use computers are elitist nerds. Why? Because using a computer is actually a lot harder than learning how to use apps on a smartphone. That is why smartphone use has skyrocketed worldwide. It is also the reason some pundits believe the days of the desktop [computer] are over. Why pay so much, and go through such a steep learning curve, to do things a smartphone can do so much more easily?

There is a part of me that wants to scream that what a smartphone can do is just a fraction of what a ‘proper’ computer can do, but the words barely form before I get a flash of the early ’80s and the emergence of the personal computer. Back then, PCs were much less powerful than mainframes, and I’m sure a lot of old-school programmers could not see why everyone couldn’t just learn FORTRAN or something…

So…smartphones may be to the future what PCs were to the past because they are:

  • cheaper,
  • convenient,
  • portable in a real sense,
  • easy to use, and
  • a growth market

But I hope, truly ruly hope that chatbots are just the toddler stage of a technological progression that will end [?] with real voice recognition and real AI support.

Until then, I’ll stick with old school search engines and my antiquated desktop because…I’m an elitist dinosaur with poor eyesight and a pathological hatred of chatbots.

cheers

Meeks


#SFX, cars and ‘The Blackbird’

I’m not a petrol-head, but I do like cars, so I could not resist a Gizmag article about how cars are filmed for commercials and movies. This is the ‘Blackbird’, and it is the most amazing car you will never see.

Gotta love CGI [computer generated imagery]. 🙂

Happy weekend,

Meeks

