Category Archives: technology

The birth of cybernetics?

Doctor Who fans will immediately recognize the concept of the ‘Cyberman’, but for everyone else, it’s a being that evolved from a biological base into a fusion of ‘meat + machine’.

In the Doctor Who series, the Cybermen are more machine than meat, but the concept stays the same. And it’s been a recurring theme in science fiction for decades. Anyone remember a TV show called The Six Million Dollar Man?

But that’s all just make believe…isn’t it?

Well, no, no it’s not, not any more. Welcome to the world of David Eagleman. If you have any interest in what makes all life on earth tick, you will find this TED talk absolutely riveting:

Did you watch it? Did it blow you away? Yeah, me too. 😀

There were a number of things in that talk that made me nod like crazy, but two really stood out:

  • the brain is a general-purpose computing device, and
  • the concept of sensory substitution

As someone interested in biology, I sort of knew about the parts of the brain and how they functioned, but until quite recently, I assumed that brain plasticity [the ability of the brain to change itself when necessary] was restricted to fairly ‘small’ functions. And then I heard about Daniel Kish. He has no eyes, so everything you see him do, he does without using the physical pathways you or I use when we ‘see’ things. Instead, he makes clicking sounds and ‘hears’ them bounce off objects in their path:

Daniel Kish is an example of biological sensory substitution because he uses his hearing to provide data to the brain which the brain then interprets as a kind of vision. It’s real, it can be done, it’s just that most of the time, we humans prefer to use the easy path we learned as babies.

Just as a matter of interest, did you know that the visual cortex of a newborn baby is ‘unfinished’? Stereopsis, or

The perception of depth produced by the reception in the brain of visual stimuli from both eyes in combination; binocular vision

https://en.oxforddictionaries.com/definition/stereopsis

is ‘learned’ in the first 18 months of a baby’s life. If something happens to disrupt this learning process, binocular vision will not develop. Instead, the child will learn to see in 3D using a process called ‘motion parallax’. I know, because that’s how I see, and I can play pretty fast and furious table tennis. 😀

The more I learn about the world, the more amazed I become at its incredible power. Is it any wonder I’m a sci-fi nut?

Special thanks to Museworthyman for pointing me towards that mind-blowing TED talk. Kindred spirits unite!

cheers

Meeks



-blush- ‘teledildonics’…

You should consider this a tech post with an R rating. You’ve been warned.

[Image: a pair of haptic gloves at work]

http://fab.cba.mit.edu/classes/863.11/people/daniel.rosenberg/pf.html

Right. This really is a case of sci-fi made obsolete by reality. The image you’re looking at shows a pair of ‘haptic’ gloves at work. They allow the wearer to manipulate elements of a digital environment directly – i.e. no need for a mouse or keyboard or game controller. Essentially, sensors in the glove translate real world movement and pressure into digital movement and pressure.

I knew about these haptic gloves because I’m a gamer, and I like to think about new technologies that make gaming more fun. Not surprisingly then, my sci-fi story, Innerscape, contains many existing technologies, extrapolated into their possible future equivalents. One example is the evolution of the haptic glove into the full body gaming suit. But even modern day technology can be used in all sorts of ways. Most people see web cams and Skype as useful tools for teleconferencing, or as a way for friends to see each other and talk in real time. To the porn industry, however, the same technology is a great way to deliver a lucrative product.

Online porn is not something I know a great deal about, but it’s not something I can ignore, either. I do a lot of research online, and anything of a sexual nature can bring up unexpected results – e.g., when I researched hermaphrodites for Vokhtah. I quickly learned to phrase my queries with great care, and that awareness informed my prediction that the porn industry would spear-head the development of immersive reality in Innerscape. Yes, I know, pun intended…

Despite this rather pragmatic view of the world, however, I had no idea that a real world company was already selling a primitive version of the immersive porn of my imagined future. What’s even worse, I had no idea that this real world company bears the same name [more or less] as a company I dreamed up for Innerscape.

[SPOILER: Leon lets the Woman in Red into his apartment when he sees that she’s delivering his brand new, top of the range, Real Touch gaming suit.]

The real world company already making haptic devices for the porn industry is called Realtouch Interactive.

I swear I am not making this up. I didn’t know about Realtouch Interactive until just now when I read about the latest developments in ‘haptic gloves’ on New Atlas. Imagine my surprise when the same article included a link to…’teledildonics’.

The link to that article is here:

http://newatlas.com/flex-n-feel-glove-long-distance-relationships/47900/

You can find the link to ‘teledildonics’ yourselves. If you so wish. -cough-

Be warned though, in the article, a male writer test drives the ‘device’, and although the descriptions are not super graphic, they don’t leave too much to the imagination. Included in the article is information about how the company created its own tech in order to sync sight, sound and data. Just as I predicted!

I suppose this is the point at which I should explain why data has to be synced along with sight and sound. The haptic ‘device’ is hooked up to the computer via USB at the user’s end. At the ‘cam girl’ end, a slightly different device allows the professional lady to control the sensations sent to the user’s device. Thus, audio, video and the haptic data have to arrive at the same time or the effect is ruined.
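For the technically curious, here’s a toy sketch of what that syncing amounts to. Everything in it – the stream names, the event format – is my own invention for illustration, not anything Realtouch actually does. Each stream stamps its events with a time in milliseconds, and a merger releases them in global time order so the haptic pulses land between the right audio and video frames:

```python
import heapq

def merge_streams(video, audio, haptic):
    """Merge three timestamped event streams into one ordered timeline.

    Each stream is a list of (timestamp_ms, payload) tuples. Tagging each
    event with its stream name and heap-ordering by timestamp gives us one
    coherent playback timeline.
    """
    tagged = (
        [(t, "video", p) for t, p in video]
        + [(t, "audio", p) for t, p in audio]
        + [(t, "haptic", p) for t, p in haptic]
    )
    heapq.heapify(tagged)            # orders by timestamp first
    while tagged:
        yield heapq.heappop(tagged)

# Invented sample data: video at ~30fps, audio chunks, one haptic pulse.
timeline = list(merge_streams(
    video=[(0, "frame0"), (33, "frame1")],
    audio=[(0, "chunk0"), (20, "chunk1")],
    haptic=[(15, "pulse")],
))
print(timeline)
```

If any one stream lags, its events arrive with stale timestamps and the illusion breaks – which is exactly why the company had to build its own tech to keep all three in lockstep.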

Long term, however, this very same technology will drive something else I wrote about in Innerscape – teleoperations. This is where the surgeon and the patient are separated by long distances, but the surgeon can still operate via a robotic surgical tool.

I don’t know about you, but I’m feeling kind of shell-shocked. None of this technology was meant to happen for decades, yet here it is in 2017. Clearly, the tech will be enhanced and improved enormously in the coming years, but I still feel rather ambivalent about the whole thing. Yes, it’s nice to predict the tech of the future, but it’s not so nice to get the timing so very wrong. Oh well…back to work.

cheers

Meeks



Beautiful, beautiful, beautiful!

Website : Kkaa.co.jp

via Tsubomi Villas by Kengo Kuma — Mega Luxus

I have always loved the inspired simplicity of Japanese art and design, but this one really does take my breath away. Curves are the basic building blocks of nature, not straight lines, but I cannot begin to imagine how much work went into creating this organic, deceptively simple shape. Pure perfection.


A smelly but good news tech post

Apologies if this puts anyone off, but I’m really excited by this innovative way of dealing with sewage. Not only does it make something useful out of a big, smelly problem, it does so in a ‘relatively’ small space. [Conventional sewage works take up acres and acres and acres of land that could be used for other things].

To read how this innovative approach actually works, please read the article on New Atlas:

http://newatlas.com/mimic-nature-sewage-oil/46260/?li_source=LI&li_medium=default-widget

As a sci-fi writer I’m interested in all kinds of futuristic world building and one of my earliest ideas was for an ‘undercity’ built to replace much of Melbourne, post sea level rises that drown the lower reaches. Obviously, the new undercity would have to be built on much higher ground to avoid being drowned as well, but it would have lots of big advantages – temperature would remain more or less constant, bushfires would no longer be a danger and the land above the city could be used for productive agriculture. [At the moment, all Australian cities spread outward and our suburbs are built on land that would be better used for the growing of food].

One major problem with this undercity, however, was the issue of waste. I imagined food waste being ‘eaten’ by the SL’ick [synthetic life chickens that look like huge worms made of chicken breast meat], but I simply could not come up with an innovative way of dealing with the body wastes we humans produce. Until now. One small step for my world of the future, one large step for waste management. 🙂

cheers

Meeks


#3D printing on a LARGE scale

I wouldn’t be much of a sci-fi writer if I didn’t keep up with technology, so I’ve had a love affair with 3D printing since I first heard about it. But the technology is changing so fast, I’m constantly being surprised. This is my surprise for the day:

Those are actual, standard sized structures, printed by huge machines. But, as if that were not surprise enough, the material used to build them is made out of a combination of industrial waste and cement, so it’s recycling on top of everything else.

Colour me gobsmacked.

The video below is an animation of how the process is supposed to work:

The video goes for almost five minutes, but the music is pretty and I couldn’t stop watching. I work with words, ideas and computers, so I’m fascinated by this technology, but I can’t help wondering about those whose jobs will be made obsolete by 3D printing. What of them?

If I had a crystal ball, I’d say that some of the manual workers of the world will become artisan crafts people – I think there will always be a demand for crafts – but only a small percentage of builders and brickies’ labourers will be able to make that transition. What of the rest?

I think our whole way of thinking about work is going to have to change. Any thoughts?

cheers

Meeks



Augmented Reality – it’s just around the corner

Vuzix knows that people don’t want to be embarrassed when they put something on their face. So the company is working hard to ship a pair of augmented reality smartglasses this year that will be thin enough to wear comfortably. The Rochester, N.Y.-based company unveiled its latest models, the Blade 3000 smart Sunglasses and the…

via Vuzix aims to ship thin augmented reality smartglasses in 2017 — VentureBeat

In Innerscape, Episode 5, I write about the NCTU agent following a digitally projected ‘map’ to his destination. In the trailer above, the guy wearing the AR smart glasses does the same thing. The details are obviously different, but the concept is the same. I am so chuffed. 😀

cheers

Meeks


#Solar powered micro-grid + #Tesla batteries = the future?

Just found this amazing article on New Atlas. It concerns a small island being powered almost exclusively by a micro-grid made up of solar panels and Tesla batteries. The batteries can be fully charged in 7 hours and can keep the grid running for 3 days without any sun at all:

Why do I find this so exciting? Distributed systems, that’s why.

“And what’s that?” you ask, eyes glazing over as you speak.

In computing, which is where I first heard the term, a distributed system is:

a model in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal.

Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing.

[https://en.wikipedia.org/wiki/Distributed_computing#Introduction]

Okay, okay. Here are some nice, juicy examples instead:

  • the internet,
  • your mobile phone network
  • MMOs [massively multiplayer online games] like the one I play,
  • virtual reality communities, and even
  • the search for extra terrestrial intelligence [SETI].

There are heaps more examples I could name, but the point is that all these systems rely on the fact that the power of the group is greater than the power of its individual components. In fact, the world wide web could not exist at all if it had to be run from just one, ginormous computer installation.
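To make that Wikipedia definition a bit less glassy-eyed, here’s a toy version of it – my own illustration, nothing more. The ‘problem’ (summing a big list) is divided into tasks, each task is handed to a separate worker, and the workers report back by passing messages on a queue:

```python
from queue import Queue
from threading import Thread

def worker(chunk, results: Queue):
    """Solve one task and report the partial result as a 'message'."""
    results.put(sum(chunk))

data = list(range(1, 101))          # 1 + 2 + ... + 100 = 5050
chunks = [data[i:i + 25] for i in range(0, 100, 25)]

results = Queue()
threads = [Thread(target=worker, args=(c, results)) for c in chunks]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Combine the partial results to achieve the common goal.
total = sum(results.get() for _ in chunks)
print(total)  # 5050
```

No single worker knows the whole answer – the power is in the group, which is the whole point.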

So distributed systems can be insanely powerful, but when it comes to powering our cities, we seem to be stuck on the old, top-down model in which one, centralised system provides energy to every component in the system – i.e. to you and me and all our appliances.

Opponents of renewables always cite baseload as the main reason why renewables won’t work in highly developed countries. What they don’t tell you is that to create baseload, they have to create electricity all the time. That means burning fossil fuels all the time and creating pollution all the time.

Centralised power generation also does something else – it concentrates the means for producing this energy in one place, so if there is a malfunction, the whole grid goes down. But that’s not all. If all power is produced in one place, it’s all too easy to strike at that one place to destroy the ‘heart’ of the whole system. It can happen. If you read the whole article on New Atlas, you’ll learn that the supply of diesel to the island was once cut, for months. When the diesel ran out, so did the electricity. Now imagine an act of sabotage that destroys the power supply to a city of millions. It hasn’t happened yet, but I think it’s just a matter of time.

By contrast, distributed processing means that you would have to destroy virtually every component of the system to shut it down completely. A good example of this is our road system. In most areas, if one part of the road is closed for whatever reason, we can still get where we want to go by taking a detour. It may take us a little bit longer, but we get there in the end. Something very similar happens with the internet. Digital information is sent in ‘packets’ which attempt to find the quickest route from point A to point X, usually via point B. However if point B goes down, the packets have multiple alternate routes to get to X. Why should power generation be any less efficient?
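The detour idea is easy to show in code. Here’s a little sketch using a made-up five-node network: packets from A to X normally go via B, but if B goes down, a breadth-first search still finds an alternate route:

```python
from collections import deque

# An invented network for illustration: who is directly connected to whom.
network = {
    "A": ["B", "C"],
    "B": ["A", "X"],
    "C": ["A", "D"],
    "D": ["C", "X"],
    "X": ["B", "D"],
}

def route(graph, start, goal, down=()):
    """Shortest path from start to goal, avoiding any nodes in `down`."""
    queue = deque([[start]])
    seen = {start} | set(down)      # treat failed nodes as already visited
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None                     # goal unreachable

print(route(network, "A", "X"))              # the quick route, via B
print(route(network, "A", "X", down=["B"]))  # B is down: detour via C and D
```

The detour is longer, but the packet still gets there – you’d have to knock out almost every node to stop it completely.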

In the past, electricity could not be stored, so it had to be generated by big, expensive power plants. That volume of electricity still can’t be stored, but in the future, it may not have to be. I foresee a time when neighbourhoods will become micro-grids, with each house/building contributing to the power needs of the whole neighbourhood. Surplus power generation will be stored in some form of battery system [it doesn’t have to be Tesla batteries, but they obviously work well in distributed systems] to provide power 24 hours a day, 7 days a week. More importantly, the type of micro-grid used could be flexible. Communities living inland with almost constant sunshine would obviously use solar, but seaside communities might use wave power, others might use hydro or geothermal.
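Here’s a back-of-the-envelope sketch of that micro-grid idea. All the figures are invented for illustration – they’re not taken from the island installation in the article. Solar input tops up a shared neighbourhood battery by day, the houses draw from it around the clock, and we can check how long the reserve survives a run of overcast days:

```python
# Invented figures: a shared battery sized to cover roughly three
# sunless days for the whole neighbourhood.
CAPACITY_KWH = 6000          # shared neighbourhood battery
DAILY_DEMAND_KWH = 1800      # what the houses draw per 24 hours

def simulate(solar_per_day, days):
    """Return the battery level at the end of each day."""
    level = CAPACITY_KWH     # start with a full battery
    history = []
    for gen in solar_per_day[:days]:
        level = min(CAPACITY_KWH, level + gen) - DAILY_DEMAND_KWH
        level = max(level, 0)
        history.append(level)
    return history

# Three fully overcast days in a row: generation drops to zero.
print(simulate([0, 0, 0], days=3))  # [4200, 2400, 600]
```

The battery just scrapes through three sunless days – size it, and the panels, for your local weather and the neighbourhood keeps its lights on without any central plant at all.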

But what of industry?

I may be a little optimistic here, but I think that distributed power generation could work for industry as well. Not only could manufacturing plants provide at least some of their own power, via both solar and wind, but they could ‘buy in’ unused power from the city. The city, meanwhile, would not generate power, but its utility companies could store excess power in massive flywheels or some other kind of large scale storage device. And finally, if none of that is enough, companies could do what utility companies already do now – they could buy in power from other states.

In this possible future, power generation would be cheaper, cleaner and much, much safer. All that’s required is for the one-size-fits-all mindset to change.

Distributed is the way of the future – start thinking about it today. 🙂

cheers

Meeks


#Solar still – cheap and efficient

Yet another example of solar technology surging ahead for use in under-developed countries. This particular device is super efficient at distilling pure water from contaminated or salt water:

Rather than heating the bulk of a body of water, the new device focuses its energy on just the surface water, which evaporated at 44° C (111° F). That allows the still to reach a reported efficiency of 88 percent, which the team believes is a record for thermal efficiency. As a result, the device could produce between 3 and 10 liters (0.8 and 2.6 US gal) of purified water per day, compared to the 1 to 5 liters (0.3 to 1.3 US gal) per day possible with most commercial stills of comparable size currently available.

The device also does something else – it provides self-sufficiency:

“The solar still we are developing would be ideal for small communities, allowing people to generate their own drinking water much like they generate their own power via solar panels on their house roof,” says Zhejun Liu, co-author of the study.

http://newatlas.com/inexpensive-efficient-solar-still/47652/

I live in a big city with all the amenities required for modern living, but a part of me longs to go off grid. Ah well, maybe one day. 🙂

cheers

Meeks


Eye-tracking for VR [virtual reality]

I just found a really interesting article in my Reader. It’s about eye-tracking technology and its use in [some] games.

The current interface involves a learning curve without, imho, much added value. That said, I have to admit I don’t play first person shooters, or the kinds of games where speed and twitch response are key.

There is one area, however, where I can see this technology becoming absolutely vital – and that’s in VR [virtual reality]:

Eye-tracking is critical to a technology called foveated rendering. With it, the screen will fully render the area that your eye is looking at. But beyond your peripheral vision, it won’t render the details that your eye can’t see.

This technique can save an enormous amount of graphics processing power. (Nvidia estimates foveated rendering can reduce graphics processing by up to three times). That is useful in VR because it takes a lot of graphics processing power to render VR images for both of your eyes. VR should be rendered at 90 frames per second in each eye in order to avoid making the user dizzy or sick.

A brief explanation is in order for non-gamers. Currently, there are two ways of viewing a game:

  • from the first person perspective
  • from the third person perspective

In first person perspective, you do not see your own body. Instead, the graphics attempt to present the view you would see if you were actually physically playing the game.

In third person perspective, you ‘follow’ behind your body, essentially seeing your character’s back the whole time. This view has advantages as it allows you to see much more in your ‘peripheral’ vision than you would if you were looking out through your character’s eyes.

In VR, however, the aim is not just to make you see what your character sees, the idea is to make you feel that you are your character. A vision system that mimicked how your eyes work by tracking your actual eye movements would increase immersion by an order of magnitude. And, of course, the computer resources freed up by this more efficient way of rendering would allow the game to create more realistic graphics elsewhere.
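For the curious, the saving is easy to estimate with a bit of arithmetic. The numbers below are my own assumptions, not Nvidia’s: a per-eye VR panel, a small circle around the gaze point rendered at full detail, and the periphery shaded at a quarter of the effort:

```python
import math

# Assumed figures for illustration only.
WIDTH, HEIGHT = 2160, 1200        # assumed per-eye VR panel resolution
FOVEA_RADIUS = 250                # pixels rendered at full detail
PERIPHERY_COST = 0.25             # periphery shaded at quarter effort

full_cost = WIDTH * HEIGHT                        # render everything fully
fovea_px = math.pi * FOVEA_RADIUS ** 2            # the circle the eye sees
foveated_cost = fovea_px + (full_cost - fovea_px) * PERIPHERY_COST

print(f"saving factor: {full_cost / foveated_cost:.1f}x")
```

With those made-up numbers the saving works out at roughly 3x – pleasingly close to Nvidia’s estimate quoted above, though that’s as much luck as anything.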

You can read the full article here:

https://wordpress.com/read/feeds/26908997/posts/1307290866

I predict that voice recognition and eye tracking are going to become key technologies in the not too distant future, not just for games but for augmented* reality as well.

Have a great Sunday,

Meeks

*Augmented reality does not seek to recreate reality, like VR. It merely projects additional ‘objects’ on top of the reality that’s already there.


#science – the best discoveries are often accidental

The modern world is built from materials our cavewoman ancestors could never have imagined – just think silicon and plastics. But now, thanks to 3D printing, and research into graphene, MIT scientists have discovered a powerful new geometry that will change our world yet again. You see, the geometry that can turn 2D graphene into a usable 3D form works just as well on other materials such as steel and concrete:

To me, however, the most fascinating part of this discovery is that it came about as the by-product of research into something else. Like Marie Curie, who discovered polonium and radium while researching uranium, the MIT scientists did not realise all the other uses for the geometry until after they had created it for graphene.

3D Graphene may or may not become the next you-beaut material, but the geometry used to create it will become the next ‘great thing’. Why? Because it will reduce the cost of manufacturing common materials while simultaneously increasing their strength. Imagine a single span of concrete ‘foam’ that’s capable of bridging an entire river, or cars that can protect their occupants from even the worst of crashes. Or, my personal favourite, how about a dome capable of covering an entire city?

Domes have been a favourite device of science fiction writers for a very long time. We’ve imagined them on distant planets, protecting human colonists from all sorts of dangers. Planet X has a toxic atmosphere? No problem. Just pop up a dome and away you go. Planet Y is an ocean world? Still no problem as domes can be built on the sea bed.

But why travel to distant star systems when domes could be used right here on Earth, to protect us from runaway pollution and climate change?

Unfortunately, the technology to actually build such huge, unsupported domes simply has not existed…until now [maybe]. All that’s needed for this next ‘great leap forward’ is the development of manufacturing grade 3D printers capable of producing such materials in quantity.

Given how quickly 3D printers have gone from cutting-edge curiosities to mass produced, ‘domestic’ products, I don’t think we’ll have long to wait.

So excited!

Meeks

