Tag Archives: future

UBI – Universal Basic Income

The difference between a social welfare handout and a universal basic income is that the former is seen as a handout to the hopeless while the latter is an acknowledgement that the jobs provided by the industrial revolution are fast disappearing. And they’re not coming back.


The interesting thing about this article from Futurism is that it suggests a UBI might actually be good for the economy itself, not just for the people displaced by technology.

As a recipient of social welfare myself, I believe that the jobs of the future will be small scale and entrepreneurial. People will provide services to each other based on a local need. In a way, this is exactly what companies like AirBnB and Uber are already doing. In twenty years' time, though, social media may allow me to request a homemade cake for my birthday and have it baked and delivered by my neighbour down the road.

Such micro-transactions could add up to trillions of dollars if everyone did it. But everyone can't do it [now] because of two things:

  • lots of red tape associated with being a small trader, and
  • a social welfare system that is punitive rather than supportive.

I can’t see a UBI being introduced any time soon because the political mindset is simply not there. Politicians have to stop thinking of their citizens as a drain on the government purse before any true change can occur. But at least the idea is gaining ground, if slowly.




Neural lace – Innerscape comes one step closer!

Apologies but I'm high-fiving myself like an idiot because of an article I just read on futurism.com:


The whole article is interesting as it attempts to predict the near, medium and long term future of communications technology, but it was this paragraph that made me so happy:

This week, we got our first look at Neuralink, a new company cofounded by Musk with a goal of building computers into our brains by way of “neural lace,” a very early-stage technology that lays on your brain and bridges it to a computer. It’s the next step beyond even that blending of the digital and physical worlds, as human and machine become one.

The only thing I’m sceptical about is the time-frame. Tech that you carry and tech that you ‘wear’ is one thing, but tech that invades your brain is something else entirely. I’m sure there will be some maverick individuals who will ignore the risk and give the neural lace a try, but most of us will not jump in quite so quickly. Think desktop computers and the general public. The vast majority of people who use smartphones now either never learned to use computers properly or never felt comfortable with them – i.e. the gain did not negate the pain.

I think the concept of an in-built, brain-machine interface will be around for quite a while before some tech comes along that will make the interface safe, painless and, most of all, easy.

To me, easy is the operative word because, as a species, we always look for the line of least resistance. I just hope I’m still around when it happens as the next few decades are going to be very interesting indeed. 🙂




-blush- ‘teledildonics’…

You should consider this a tech post with an R rating. You’ve been warned.



Right. This really is a case of sci-fi made obsolete by reality. The image you’re looking at shows a pair of ‘haptic’ gloves at work. They allow the wearer to manipulate elements of a digital environment directly – i.e. no need for a mouse or keyboard or game controller. Essentially, sensors in the glove translate real world movement and pressure into digital movement and pressure.

I knew about these haptic gloves because I’m a gamer, and I like to think about new technologies that make gaming more fun. Not surprisingly then, my sci-fi story, Innerscape, contains many existing technologies, extrapolated into their possible future equivalents. One example is the evolution of the haptic glove into the full body gaming suit. But even modern day technology can be used in all sorts of ways. Most people see web cams and Skype as a useful tool for teleconferencing, or to allow friends to see each other and talk in real time. To the porn industry, however, the same technology is a great way to deliver a lucrative product.

Online porn is not something I know a great deal about, but it's not something I can ignore, either. I do a lot of research online, and anything of a sexual nature can bring up unexpected results – e.g., when I researched hermaphrodites for Vokhtah. I quickly learned to phrase my queries with great care, and that awareness informed my prediction that the porn industry would spear-head the development of immersive reality in Innerscape. Yes, I know, pun intended…

Despite this rather pragmatic view of the world, however, I had no idea that a real world company was already selling a primitive version of the immersive porn of my imagined future. What’s even worse, I had no idea that this real world company bears the same name [more or less] as a company I dreamed up for Innerscape.

[SPOILER: Leon lets the Woman in Red into his apartment when he sees that she’s delivering his brand new, top of the range, Real Touch gaming suit.]

The real world company already making haptic devices for the porn industry is called Realtouch Interactive.

I swear I am not making this up. I didn’t know about Realtouch Interactive until just now when I read about the latest developments in ‘haptic gloves’ on New Atlas. Imagine my surprise when the same article included a link to…’teledildonics’.

The link to that article is here:


You can find the link to ‘teledildonics’ yourselves. If you so wish. -cough-

Be warned though, in the article, a male writer test drives the ‘device’, and although the descriptions are not super graphic, they don’t leave too much to the imagination. Included in the article is information about how the company created its own tech in order to sync sight, sound and data. Just as I predicted!

I suppose this is the point at which I should explain why data has to be synced along with sight and sound. The haptic 'device' is hooked up to the computer via USB at the user's end. At the 'cam girl' end, a slightly different device allows the professional lady to control the sensations sent to the user's device. Thus, audio, video and the transfer of this haptic data all have to occur at the same time or the effect is ruined.
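For the technically minded, here's a tiny Python sketch of the sync idea. Everything in it – the packet format, the 50 ms 'skew budget', the play/drop rule – is my own invention for illustration, not Realtouch's actual protocol:

```python
def playout(packets, max_skew_ms=50):
    """Process packets in arrival order. Audio and video frames advance
    the playback clock; a haptic packet that lags the clock by more than
    max_skew_ms is dropped, because late touch feedback ruins the illusion."""
    clock = 0          # timestamp (ms) of the newest audio/video frame played
    decisions = []
    for ts, channel in packets:
        if channel in ("audio", "video"):
            clock = max(clock, ts)
            decisions.append((channel, ts, "play"))
        else:  # haptic
            action = "play" if clock - ts <= max_skew_ms else "drop"
            decisions.append((channel, ts, action))
    return decisions
```

Feed it a stream where a haptic packet arrives 100 ms behind the picture and that packet gets dropped rather than played out of sync – which is the whole point of syncing the three channels.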

Long term, however, this very same technology will drive something else I wrote about in Innerscape – teleoperations. This is where the surgeon and the patient are separated by long distances, but the surgeon can still operate via a robotic surgical tool.

I don’t know about you, but I’m feeling kind of shell-shocked. None of this technology was meant to happen for decades, yet here it is in 2017. Clearly, the tech will be enhanced and improved enormously in the coming years, but I still feel rather ambivalent about the whole thing. Yes, it’s nice to predict the tech of the future, but it’s not so nice to get the timing so very wrong. Oh well…back to work.






A smelly but good news tech post

Apologies if this puts anyone off, but I’m really excited by this innovative way of dealing with sewage. Not only does it make something useful out of a big, smelly problem, it does so in a ‘relatively’ small space. [Conventional sewage works take up acres and acres and acres of land that could be used for other things].

To read how this innovative approach actually works, please read the article on New Atlas:


As a sci-fi writer I’m interested in all kinds of futuristic world building and one of my earliest ideas was for an ‘undercity’ built to replace much of Melbourne, post sea level rises that drown the lower reaches. Obviously, the new undercity would have to be built on much higher ground to avoid being drowned as well, but it would have lots of big advantages – temperature would remain more or less constant, bushfires would no longer be a danger and the land above the city could be used for productive agriculture. [At the moment, all Australian cities spread outward and our suburbs are built on land that would be better used for the growing of food].

One major problem with this undercity, however, was the issue of waste. I imagined food waste being ‘eaten’ by the SL’ick [synthetic life chickens that look like huge worms made of chicken breast meat], but I simply could not come up with an innovative way of dealing with the body wastes we humans produce. Until now. One small step for my world of the future, one large step for waste management. 🙂



#3D printing on a LARGE scale

I wouldn't be much of a sci-fi writer if I didn't keep up with technology, so I've had a love affair with 3D printing since I first heard about it. But the technology is changing so fast that I'm constantly being surprised. This is my surprise for the day:

Those are actual, standard-sized structures, printed by huge machines. But, as if that were not surprising enough, the material used to build them is made out of a combination of industrial waste and cement, so it's recycling on top of everything else.

Colour me gobsmacked.

The video below is an animation of how the process is supposed to work:

The video goes for almost five minutes, but the music is pretty and I couldn’t stop watching. I work with words, ideas and computers, so I’m fascinated by this technology, but I can’t help wondering about those whose jobs will be made obsolete by 3D printing. What of them?

If I had a crystal ball, I'd say that some of the manual workers of the world will become artisan crafts people – I think there will always be a demand for crafts – but only a small percentage of builders and brickies' labourers will be able to make that transition. What of the rest?

I think our whole way of thinking about work is going to have to change. Any thoughts?






#Solar powered micro-grid + #Tesla batteries = the future?

Just found this amazing article on New Atlas. It concerns a small island being powered almost exclusively by a micro-grid made up of solar panels and Tesla batteries. The batteries can be fully charged in 7 hours and can keep the grid running for 3 days without any sun at all:

Why do I find this so exciting? Distributed systems, that’s why.

“And what’s that?” you ask, eyes glazing over as you speak.

In computing, which is where I first heard the term, a distributed system is:

a model in which components located on networked computers communicate and coordinate their actions by passing messages. The components interact with each other in order to achieve a common goal.

Distributed computing also refers to the use of distributed systems to solve computational problems. In distributed computing, a problem is divided into many tasks, each of which is solved by one or more computers, which communicate with each other by message passing.


Okay, okay. Here are some nice, juicy examples instead:

  • the internet,
  • your mobile phone network
  • MMOs [massively multiplayer online games] like the one I play,
  • virtual reality communities, and even
  • the search for extraterrestrial intelligence [SETI].

There are heaps more examples I could name, but the point is that all these systems rely on the fact that the power of the group is greater than the power of its individual components. In fact, the world wide web could not exist at all if it had to be run from just one, ginormous computer installation.
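For anyone who'd like to see the 'divide a problem into many tasks' idea in action, here's a minimal Python sketch. The job (a simple sum), the worker count and the names are all just illustrations of the principle, not how any real system is built:

```python
import threading
import queue

def distributed_sum(numbers, n_workers=4):
    """Split one big job into independent tasks, hand each to a worker
    'node', and combine the partial answers passed back as messages."""
    results = queue.Queue()                       # the 'message passing' channel
    chunk = (len(numbers) + n_workers - 1) // n_workers
    tasks = [numbers[i:i + chunk] for i in range(0, len(numbers), chunk)]

    def node(task):
        results.put(sum(task))                    # each node solves its piece

    workers = [threading.Thread(target=node, args=(t,)) for t in tasks]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    return sum(results.get() for _ in tasks)      # combine the partial results
```

No single worker ever sees the whole problem, yet together they produce the answer – which is exactly why the group is more powerful than any one component.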

So distributed systems can be insanely powerful, but when it comes to powering our cities, we seem to be stuck on the old, top-down model in which one, centralised system provides energy to every component in the system – i.e. to you and me and all our appliances.

Opponents of renewables always cite baseload as the main reason why renewables won’t work in highly developed countries. What they don’t tell you is that to create baseload, they have to create electricity all the time. That means burning fossil fuels all the time and creating pollution all the time.

Centralised power generation also does something else – it concentrates the means for producing this energy in one place, so if there is a malfunction, the whole grid goes down. But that’s not all. If all power is produced in one place, it’s all too easy to strike at that one place to destroy the ‘heart’ of the whole system. It can happen. If you read the whole article on New Atlas, you’ll learn that the supply of diesel to the island was once cut, for months. When the diesel ran out, so did the electricity. Now imagine an act of sabotage that destroys the power supply to a city of millions. It hasn’t happened yet, but I think it’s just a matter of time.

By contrast, distributed processing means that you would have to destroy virtually every component of the system to shut it down completely. A good example of this is our road system. In most areas, if one part of the road is closed for whatever reason, we can still get where we want to go by taking a detour. It may take us a little bit longer, but we get there in the end. Something very similar happens with the internet. Digital information is sent in 'packets' which attempt to find the quickest route from point A to point X, usually via point B. However, if point B goes down, the packets have multiple alternate routes to get to X. Why should power generation be any less efficient?
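Here's a toy Python version of that detour-finding, using a made-up five-node network. Real internet routing is far more sophisticated, but the principle – route around the failed node – is the same:

```python
from collections import deque

def route(graph, src, dst, down=frozenset()):
    """Breadth-first search for the shortest path from src to dst,
    skipping any nodes that are 'down' -- the way packets detour."""
    seen, frontier = {src}, deque([[src]])
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen and nxt not in down:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None  # only happens if every route is cut

# An invented network: A normally reaches X via B, or the long way via C and D.
net = {"A": ["B", "C"], "B": ["X"], "C": ["D"], "D": ["X"], "X": []}
```

With all nodes up, the packets go A → B → X. Knock out B and they still arrive, via A → C → D → X – a little longer, but they get there, just like the road detour.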

In the past, electricity could not be stored, so it had to be generated by big, expensive power plants. That volume of electricity still can’t be stored, but in the future, it may not have to be. I foresee a time when neighbourhoods will become micro-grids, with each house/building contributing to the power needs of the whole neighbourhood. Surplus power generation will be stored in some form of battery system [it doesn’t have to be Tesla batteries, but they obviously work well in distributed systems] to provide power 24 hours a day, 7 days a week. More importantly, the type of micro-grid used could be flexible. Communities living inland with almost constant sunshine would obviously use solar, but seaside communities might use wave power, others might use hydro or geothermal.
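Some back-of-the-envelope Python to show how a neighbourhood might size such a battery system. The load and autonomy figures below are invented for illustration – they are not the island installation's actual numbers:

```python
def storage_needed(avg_load_kw, autonomy_hours):
    """Battery capacity (kWh) needed to ride out a sunless spell."""
    return avg_load_kw * autonomy_hours

def charge_rate_needed(capacity_kwh, charge_hours):
    """Average charging power (kW) needed to refill in the stated window."""
    return capacity_kwh / charge_hours

# Illustrative numbers only: an 80 kW average load, 3 days of autonomy,
# and a 7-hour daytime charging window like the one in the article.
capacity = storage_needed(avg_load_kw=80, autonomy_hours=72)
rate = charge_rate_needed(capacity, charge_hours=7)
```

The arithmetic is trivial, but it makes the design trade-off visible: the longer you want the grid to run without sun, the bigger the battery, and the bigger the battery, the more solar [or wave, or geothermal] capacity you need to refill it in the hours available.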

But what of industry?

I may be a little optimistic here, but I think that distributed power generation could work for industry as well. Not only could manufacturing plants provide at least some of their own power, via both solar and wind, but they could 'buy in' unused power from the city. The city, meanwhile, would not generate power, but its utility companies could store excess power in massive flywheels or some other kind of large scale storage device. And finally, if none of that is enough, companies could do what utility companies already do now – they could buy in power from other states.

In this possible future, power generation would be cheaper, cleaner and much, much safer. All that’s required is for the one-size-fits-all mindset to change.

Distributed is the way of the future – start thinking about it today. 🙂



Eye-tracking for VR [virtual reality]

I just found a really interesting article in my Reader. It's about eye-tracking technology and its use in [some] games.

The current interface involves a learning curve without, imho, much added value. That said, I have to admit I don't play first person shooters, or the kinds of games where speed and twitch response are key.

There is one area, however, where I can see this technology becoming absolutely vital – and that’s in VR [virtual reality]:

Eye-tracking is critical to a technology called foveated rendering. With it, the screen will fully render the area that your eye is looking at. But beyond your peripheral vision, it won’t render the details that your eye can’t see.

This technique can save an enormous amount of graphics processing power. (Nvidia estimates foveated rendering can reduce graphics processing by up to three times). That is useful in VR because it takes a lot of graphics processing power to render VR images for both of your eyes. VR should be rendered at 90 frames per second in each eye in order to avoid making the user dizzy or sick.

A brief explanation is in order for non-gamers. Currently, there are two ways of viewing a game:

  • from the first person perspective
  • from the third person perspective

In first person perspective, you do not see your own body. Instead, the graphics attempt to present the view you would see if you were actually physically playing the game.

In third person perspective, you ‘follow’ behind your body, essentially seeing your character’s back the whole time. This view has advantages as it allows you to see much more in your ‘peripheral’ vision than you would if you were looking out through your character’s eyes.

In VR, however, the aim is not just to make you see what your character sees, the idea is to make you feel that you are your character. A vision system that mimicked how your eyes work by tracking your actual eye movements would increase immersion by an order of magnitude. And, of course, the computer resources freed up by this more efficient way of rendering would allow the game to create more realistic graphics elsewhere.
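A rough Python estimate of the saving foveated rendering can deliver. The screen resolution, the size of the fully rendered 'fovea' and the peripheral sampling rate are all guesses on my part, chosen only to show how the arithmetic lands in the region of Nvidia's 'up to three times' figure:

```python
import math

def foveated_savings(width, height, fovea_frac=0.25, periphery_scale=0.25):
    """Compare full-resolution rendering with rendering only a central
    circular 'fovea' at full detail and the periphery at a reduced rate.
    Returns the ratio of full-render pixels to foveated-render pixels."""
    full = width * height
    fovea_radius = fovea_frac * min(width, height)
    fovea = math.pi * fovea_radius ** 2          # pixels rendered in full
    foveated = fovea + (full - fovea) * periphery_scale
    return full / foveated

# Illustrative per-frame figures for a hypothetical 2160 x 1200 VR panel.
ratio = foveated_savings(2160, 1200)
```

With these made-up parameters the ratio comes out at roughly three, and that saving is multiplied by two eyes and 90 frames per second – which is why eye-tracking matters so much to VR.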

You can read the full article here:


I predict that voice recognition and eye tracking are going to become key technologies in the not too distant future, not just for games but for augmented* reality as well.

Have a great Sunday,


*Augmented reality does not seek to recreate reality, like VR. It merely projects additional ‘objects’ on top of the reality that’s already there.

#science – the best discoveries are often accidental

The modern world is built from materials our cavewoman ancestors could never have imagined – just think silicon and plastics. But now, thanks to 3D printing, and research into graphene, MIT scientists have discovered a powerful new geometry that will change our world yet again. You see, the geometry that can turn 2D graphene into a usable 3D form works just as well on other materials such as steel and concrete:

To me, however, the most fascinating part of this discovery is that it came about as the by-product of research into something else. Like Marie Curie, who discovered polonium and radium while researching uranium, the MIT scientists did not realise all the other uses for the geometry until after they had created it for graphene.

3D Graphene may or may not become the next you-beaut material, but the geometry used to create it will become the next ‘great thing’. Why? Because it will reduce the cost of manufacturing common materials while simultaneously increasing their strength. Imagine a single span of concrete ‘foam’ that’s capable of bridging an entire river, or cars that can protect their occupants from even the worst of crashes. Or, my personal favourite, how about a dome capable of covering an entire city?

Domes have been a favourite device of science fiction writers for a very long time. We’ve imagined them on distant planets, protecting human colonists from all sorts of dangers. Planet X has a toxic atmosphere? No problem. Just pop up a dome and away you go. Planet Y is an ocean world? Still no problem as domes can be built on the sea bed.

But why travel to distant star systems when domes could be used right here on Earth, to protect us from runaway pollution and climate change?

Unfortunately, the technology to actually build such huge, unsupported domes simply has not existed…until now [maybe]. All that's needed for this next 'great leap forward' is the development of manufacturing grade 3D printers capable of producing such materials in quantity.

Given how quickly 3D printers have gone from cutting-edge curiosities to mass produced, ‘domestic’ products, I don’t think we’ll have long to wait.

So excited!


#scifi ? Or the genuine history of a war yet to come?


I have been a fan of author Chris James for some time. How could I not? He's a very good sci-fi writer! Anyway, when I read this blog post of his, I was intrigued to say the least. Read it and see for yourselves:

The Stranger and the Manuscript

I had the shock of my life a few days ago when I took the dogs for a walk in my local forest, only for a stranger to approach me and address me by name; in fact, by both my author name and my real name. Much greater shocks were to come later in our brief discussion.

Standing slightly less than average height, the Stranger wore loose-fitting black garb which hid all body contours, and the hood fitted quite tightly over the head and wrapped around it to obscure the chin, mouth and nose. Only two piercing blue eyes stared out at me. In addition, the pitch and timbre of the muffled voice gave no indication of this person's sex; it could've been a female with a low voice or a male with a high voice. He/she spoke in a gender-neutral tone that would shortly become very frustrated with me.

My disorientation at being denied clues to this person's identity was compounded by the reactions from my dogs. Normally they run and sniff everything in the forest. Crazy in particular never stops moving for a millisecond, and flies through life with a constant expression of wondrous stupidity on her ugly face (well, they say dogs take after their owners *sigh*). Now, however, I noticed that both dogs had become still, frozen…

– See more at: http://chrisjamesauthor.com/books/the-stranger-and-the-manuscript/#more-2334

New disruptive tech – aircraft

Just had to share an article about the D-Dalus [a play on words from Daedalus] that appeared on Gizmag, my new, favourite, future-tech site:


The D-Dalus could become one of those 'why did no one think of this before' type of innovations that influence future tech for decades to come.



