Second Life: Windows 32-bit support ending; macOS system requirements updating

via Linden Lab

In keeping with statements made in recent user group meetings, Linden Lab has announced changes to the Second Life minimum system requirements for both the Windows and Apple macOS operating systems, which are to take effect from Saturday, July 1st, 2023. Specifically, from that point onward, Linden Lab / Second Life will no longer support:

  • The 32-bit version of the Windows operating system.
  • Any version of macOS below 10.13 (High Sierra).

Available stats for Windows suggest that the percentage of users running the 32-bit version is in the low single digits – and many who are running it do so on hardware capable of supporting the 64-bit version. The stats for macOS versions below 10.13 are less clear, but High Sierra has been generally available since 2017.

With these changes it is further noted that:

  • Users running the viewer on 32-bit Windows will still be able to access Second Life after July 1st, 2023 – however, they will not be supported by the Lab’s support teams if they have viewer difficulties.
  • Users on versions of macOS preceding 10.13 will need to upgrade their operating system to a newer version to access future Second Life viewer updates.

These steps are being taken to allow the Lab to focus on stability, performance, and maintaining compatibility with the most up-to-date technologies, as well as on viewer security considerations.

To help users running either 32-bit Windows or an older version of macOS maintain parity with the SL minimum system requirements from July 1st, 2023 onwards, the Lab provides the following advice:

  • Second Life 32-bit Windows Viewer Users:

    • Ensure your computer meets the system requirements for the 64-bit Viewer.

    • Back up your preferences and settings before making the transition (see the sketch after this list).

    • Download and install the 64-bit version of the Second Life Viewer.

  • Users on macOS Versions Older than 10.13:

    • Check the Apple support website for information on upgrading your operating system.

    • Upgrade to a supported version of macOS that meets Second Life’s minimum requirements.

    • Before upgrading, back up your important files and make sure your system is compatible with the newer version of macOS.
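
For the backup step mentioned above, here is a minimal sketch of one way Windows users might copy the viewer’s settings folder to a safe place before switching. It assumes the settings live in the usual default location of %APPDATA%\SecondLife; verify the actual path on your machine before relying on it, as installs can vary.

```python
import os
import shutil
from datetime import datetime

# Assumed default location of the viewer's per-user settings on Windows
# (commonly %APPDATA%\SecondLife); check your own install first.
settings_dir = os.path.join(os.environ["APPDATA"], "SecondLife")

# Timestamped backup folder on the desktop, e.g. SecondLife_backup_20230701.
# copytree will refuse to run if this folder already exists.
backup_dir = os.path.expanduser(
    os.path.join("~", "Desktop", "SecondLife_backup_" + datetime.now().strftime("%Y%m%d"))
)

shutil.copytree(settings_dir, backup_dir)
print(f"Copied {settings_dir} -> {backup_dir}")
```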

For full details on the changes, please refer to the official blog post. Concerns or feedback should be directed to Linden Lab, as I am unable to address support issues.

Have A Great Week From All Of Us At Zoha Islands/Fruit Islands

How to Create PBR Materials

This week’s Second Life University tutorial will teach you how to create, upload, apply, and fix PBR Materials in Second Life, and much more!

Download the latest GLTF Materials release candidate viewer, log in to the Beta grid, and get playing with PBR Materials!

Have a great week from all of us at Zoha Islands / Fruit Islands

In Russia, Ukraine News is Censored, But Counter-Strike is Extremely Popular.

So Reporters Put the Truth About the Invasion in a Secret Room Players Can Only See When They Die.

Amazing cyberpunk-made-real story of the month, made possible by the fact that Counter-Strike: Global Offensive is extremely popular with young gamers in Russia (most of them either in Putin’s military or likely to be conscripted into it), a country where news of its real invasion of Ukraine is censored.

Here’s what Finnish reporters did with a user-made CS:GO map:

When a player’s avatar dies, the game view changes: For a while, the player can go through walls or fly around the map. At this stage, they can also find the secret room hidden underground…

There is a light above the door. This is a hint that taking a closer look at the object may reveal something.

There are stairs leading to an underground room.

First, the player sees a wall with a headline.

Next to the headline, there is a map showing how many civilian targets in Ukraine have been hit by Russians.

Another wall displays a news item on the Bucha massacre. When the player comes close to the images, they hear a Russian voice-over on their radio telling them what the images are about: Ukrainian civilians murdered by Russian soldiers.

Read the English-language story here. The CS:GO map “de-voyna” is available on Steam. Consider giving it a good rating, so that more players will see and experience it.

Have A Great Week From All Of Us At Zoha Islands and Fruit Islands

How Will A.I. Hurt Second Life In The Near Future?

I asked a chatbot A.I., “Will A.I. destroy Second Life?” and this is what it came up with. I will include both answers, and you can see that they are quite descriptive. You decide whether you should use A.I. yourself. I will not be using it; in fact, I will be disconnecting any and all paths to it in writing this blog.

Answer 1: It is difficult to predict exactly how AI will affect Second Life in the future, as there are many variables to consider. However, in a worst-case scenario, AI could potentially hurt Second Life in several ways:

  1. Lack of Personalization: While AI has the potential to provide personalized content and recommendations, there is also a risk that it could lead to a homogenization of content within Second Life. If AI algorithms prioritize certain types of content or behavior over others, it could limit users’ exposure to diverse and unique experiences within the virtual world.

  2. Invasive Surveillance: If AI is used for content moderation or monitoring user behavior, it could potentially lead to an invasion of user privacy. If users feel that their every move within Second Life is being watched and analyzed, they may be less likely to engage in the platform and share personal information.

  3. Bias and Discrimination: AI algorithms are only as unbiased as the data they are trained on. If the data used to train AI is biased or discriminatory, it could lead to AI-powered systems that are unfair and discriminatory towards certain groups of users. This could lead to a loss of trust in the platform and harm Second Life’s reputation.

  4. Loss of Human Interaction: One of the key draws of Second Life is the ability to interact with other users in a virtual environment. If AI is used to replace human interaction or automate tasks within the platform, it could lead to a loss of the social and community aspects that make Second Life unique.

However, it is important to note that these risks can be mitigated through responsible and ethical AI development. By prioritizing user privacy, transparency, and fairness in AI implementation, Second Life can minimize the potential negative impacts of AI and ensure that it continues to thrive as a vibrant and engaging virtual world.

Answer 2: Artificial intelligence (AI) has the potential to revolutionize our world in numerous ways. From improving healthcare to automating tedious tasks, AI can make our lives easier and more efficient. However, as with any technology, there are also potential negative consequences that must be considered. In the case of Second Life, an online virtual world where users can create and interact with avatars, the use of AI could have a detrimental impact. In this blog, we will explore how AI could hurt Second Life and the potential implications of this technology.

First, let’s examine the nature of Second Life. It is an online world where users can create avatars and interact with others in a virtual environment. It is a place where people can explore their creativity, socialize, and even conduct business. While it may seem like a simple game or platform, Second Life has become a significant part of many people’s lives. It is a place where people have invested time, money, and emotional energy. Therefore, any change that impacts this world can have a significant impact on its users.

One way that AI could hurt Second Life is by automating many of the tasks that users currently do manually. For example, imagine if an AI system could automatically design and build new virtual objects without any human input. While this may sound convenient, it could lead to a decline in creativity and a lack of diversity in the virtual world. Additionally, if AI systems become responsible for managing the virtual economy, it could lead to a lack of transparency and fairness. Users could feel that the virtual world is rigged against them, leading to frustration and resentment.

Another potential issue with AI in Second Life is the potential for it to be used for malicious purposes. For example, AI could be used to create bots that can infiltrate the virtual world and wreak havoc. These bots could be programmed to steal virtual currency, damage virtual property, or harass other users. Furthermore, AI could be used to create deepfakes, which are videos or images that are manipulated to show something that never happened. This could lead to false accusations, damage to reputations, and a breakdown of trust in the virtual community.

Finally, the use of AI in Second Life could also lead to a decline in human interaction. If AI systems become advanced enough to mimic human behavior and conversation, users may be less inclined to interact with real people. This could lead to a decline in social skills and a lack of empathy, as users become more accustomed to interacting with machines rather than people.

In conclusion, while AI has the potential to revolutionize our world, it is important to consider its potential negative consequences. In the case of Second Life, the use of AI could lead to a decline in creativity, a lack of transparency, malicious use, and a decline in human interaction. As we continue to develop this technology, we must be aware of its potential impact on our virtual worlds and take steps to mitigate any negative consequences.

Have a great week from all of us at Zoha Islands / Fruit Islands

How the first chatbot predicted the dangers of AI more than 50 years ago

Oshan Jarow is a fellow at Vox’s Future Perfect, where he focuses on economics, consciousness studies, and varieties of progress. Before joining Vox, he co-founded the Library of Economic Possibility, where he led policy research and digital media strategy.

It didn’t take long for Microsoft’s new AI-infused search engine chatbot — codenamed “Sydney” — to display a growing list of discomforting behaviors after it was introduced early in February, with weird outbursts ranging from unrequited declarations of love to painting some users as “enemies.”

As human-like as some of those exchanges appeared, they probably weren’t the early stirrings of a conscious machine rattling its cage. Instead, Sydney’s outbursts reflect its programming, absorbing huge quantities of digitized language and parroting back what its users ask for. Which is to say, it reflects our online selves back to us. And that shouldn’t have been surprising — chatbots’ habit of mirroring us back to ourselves goes back way further than Sydney’s rumination on whether there is a meaning to being a Bing search engine. In fact, it’s been there since the introduction of the first notable chatbot more than 50 years ago.

In 1966, MIT computer scientist Joseph Weizenbaum released ELIZA (named after the fictional Eliza Doolittle from George Bernard Shaw’s 1913 play Pygmalion), the first program that allowed some kind of plausible conversation between humans and machines. The process was simple: Modeled after the Rogerian style of psychotherapy, ELIZA would rephrase whatever speech input it was given in the form of a question. If you told it a conversation with your friend left you angry, it might ask, “Why do you feel angry?”

Ironically, though Weizenbaum had designed ELIZA to demonstrate how superficial the state of human-to-machine conversation was, it had the opposite effect. People were entranced, engaging in long, deep, and private conversations with a program that was only capable of reflecting users’ words back to them. Weizenbaum was so disturbed by the public response that he spent the rest of his life warning against the perils of letting computers — and, by extension, the field of AI he helped launch — play too large a role in society.

ELIZA built its responses around a single keyword from users, making for a pretty small mirror. Today’s chatbots reflect our tendencies drawn from billions of words. Bing might be the largest mirror humankind has ever constructed, and we’re on the cusp of installing such generative AI technology everywhere.

But we still haven’t really addressed Weizenbaum’s concerns, which grow more relevant with each new release. If a simple academic program from the ’60s could affect people so strongly, how will our escalating relationship with artificial intelligences operated for profit change us? There’s great money to be made in engineering AI that does more than just respond to our questions, but plays an active role in bending our behaviors toward greater predictability. These are two-way mirrors. The risk, as Weizenbaum saw, is that without wisdom and deliberation, we might lose ourselves in our own distorted reflection.

ELIZA showed us just enough of ourselves to be cathartic

Weizenbaum did not believe that any machine could ever actually mimic — let alone understand — human conversation. “There are aspects to human life that a computer cannot understand — cannot,” Weizenbaum told the New York Times in 1977. “It’s necessary to be a human being. Love and loneliness have to do with the deepest consequences of our biological constitution. That kind of understanding is in principle impossible for the computer.”

That’s why the idea of modeling ELIZA after a Rogerian psychotherapist was so appealing — the program could simply carry on a conversation by asking questions that didn’t require a deep pool of contextual knowledge, or a familiarity with love and loneliness.

Named after the American psychologist Carl Rogers, Rogerian (or “person-centered”) psychotherapy was built around listening and restating what a client says, rather than offering interpretations or advice. “Maybe if I thought about it 10 minutes longer,” Weizenbaum wrote in 1984, “I would have come up with a bartender.”

To communicate with ELIZA, people would type into an electric typewriter that wired their text to the program, which was hosted on an MIT system. ELIZA would scan what it received for keywords that it could flip back around into a question. For example, if your text contained the word “mother,” ELIZA might respond, “How do you feel about your mother?” If it found no keywords, it would default to a simple prompt, like “tell me more,” until it received a keyword that it could build a question around.
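
To show just how shallow that mechanism was, here is a minimal sketch of the keyword-flipping loop in Python. The keywords and templates below are invented for illustration; the original ELIZA worked from a much larger script of rules.

```python
import random

# Keyword -> question templates; "{0}" is filled with the text after the keyword.
# These rules are invented for illustration; real ELIZA scripts were far larger.
RULES = {
    "i feel": ["Why do you feel {0}?"],
    "i am": ["How long have you been {0}?"],
    "mother": ["How do you feel about your mother?"],
    "father": ["Tell me more about your father."],
}

# Used when no keyword matches, just as ELIZA fell back to simple prompts.
DEFAULTS = ["Tell me more.", "Please go on.", "How does that make you feel?"]

def respond(text: str) -> str:
    lowered = text.lower()
    for keyword, templates in RULES.items():
        idx = lowered.find(keyword)
        if idx != -1:
            rest = lowered[idx + len(keyword):].strip(" .!?")
            return random.choice(templates).format(rest)
    return random.choice(DEFAULTS)

print(respond("I feel angry"))   # -> Why do you feel angry?
print(respond("Nice weather"))   # -> one of the default prompts
```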

Weizenbaum intended ELIZA to show how shallow computerized understanding of human language was. But users immediately formed close relationships with the chatbot, stealing away for hours at a time to share intimate conversations. Weizenbaum was particularly unnerved when his own secretary, upon first interacting with the program she had watched him build from the beginning, asked him to leave the room so she could carry on privately with ELIZA.

Shortly after Weizenbaum published a description of how ELIZA worked, “the program became nationally known and even, in certain circles, a national plaything,” he reflected in his 1976 book, Computer Power and Human Reason.

To his dismay, the potential to automate the time-consuming process of therapy excited psychiatrists. People so reliably developed emotional and anthropomorphic attachments to the program that it came to be known as the ELIZA effect. The public received Weizenbaum’s intent exactly backward, taking his demonstration of the superficiality of human-machine conversation as proof of its depth.

Weizenbaum thought that publishing his explanation of ELIZA’s inner functioning would dispel the mystery. “Once a particular program is unmasked, once its inner workings are explained in language sufficiently plain to induce understanding, its magic crumbles away,” he wrote. Yet people seemed more interested in carrying on their conversations than interrogating how the program worked.

If Weizenbaum’s cautions settled around one idea, it was restraint. “Since we do not now have any ways of making computers wise,” he wrote, “we ought not now to give computers tasks that demand wisdom.”

Sydney showed us more of ourselves than we’re comfortable with

If ELIZA was so superficial, why was it so relatable? Since its responses were built from the user’s immediate text input, talking with ELIZA was basically a conversation with yourself — something most of us do all day in our heads. Yet here was a conversational partner without any personality of its own, content to keep listening until prompted to offer another simple question. That people found comfort and catharsis in these opportunities to share their feelings isn’t all that strange.

But this is where Bing — and all large language models (LLMs) like it — diverges. Talking with today’s generation of chatbots is speaking not just with yourself, but with huge agglomerations of digitized speech. And with each interaction, the corpus of available training data grows.

LLMs are like card counters at a poker table. They analyze all the words that have come before and use that knowledge to estimate the probability of what word will most likely come next. Since Bing is a search engine, it still begins with a prompt from the user. Then it builds responses one word at a time, each time updating its estimate of the most probable next word.
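
To make the “prediction engine” point concrete, here is a toy sketch of the idea: count which word follows which in a tiny corpus, then generate text by sampling the next word in proportion to those counts. Real LLMs condition on far more context than the previous word and operate over tokens, but the build-a-response-one-word-at-a-time loop has the same shape.

```python
import random
from collections import Counter, defaultdict

# Tiny stand-in for training data; real models ingest billions of words.
corpus = "the cat sat on the mat and the dog sat on the mat".split()

# Count how often each word follows each other word (a bigram table).
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    counts = follows[word]
    if not counts:  # dead end: the word was never seen with a follower
        return random.choice(corpus)
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

# Start from a prompt and build a "response" one word at a time,
# re-estimating the most probable next word at every step.
output = ["the"]
for _ in range(8):
    output.append(next_word(output[-1]))
print(" ".join(output))
```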

Once we see chatbots as big prediction engines working off online data — rather than intelligent machines with their own ideas — things get less spooky. It gets easier to explain why Sydney threatened users who were too nosy, tried to dissolve a marriage, or imagined a darker side of itself. These are all things we humans do. In Sydney, we saw our online selves predicted back at us.

But what is still spooky is that these reflections now go both ways.

From influencing our online behaviors to curating the information we consume, interacting with large AI programs is already changing us. They no longer passively wait for our input. Instead, AI is now proactively shaping significant parts of our lives, from workplaces to courtrooms. With chatbots in particular, we use them to help us think and give shape to our thoughts. This can be beneficial, like automating personalized cover letters (especially for applicants for whom English is a second or third language). But it can also narrow the diversity and creativity that arises from the human effort to give voice to experience. By definition, LLMs suggest predictable language. Lean on them too heavily, and that algorithm of predictability becomes our own.

Next Week, AI Will Tell On Itself About How It Will Dominate The World!! The Blog Will Be Written By AI About AI.

Stay Tuned!!

Have A Great Week From All Of Us At Zoha Islands / Fruit Islands

Cica’s Happy Town in Second Life

For those who have visited and enjoyed Cica Ghost’s region-wide art installations in Second Life over the last decade plus, her build for April 2023 may well raise a sense of nostalgia and memory, whilst retaining its own originality.

Happy Town, which opened on April 7th, 2023, presents a whimsical townscape with a rather unusual feature: everything in it appears to be made of, or covered by, sewn and stitched fabrics, or has been knitted. The land sits as a patchwork quilt, and buildings appear to have wall coverings sewn onto them, indoors and out. Even the trees are strangely two-dimensional, their tops looking like snare drums over which green baize has been stretched and onto which flowers have been sewn, before being sat on their sides atop hemmed and sewn trunks. Even the sky appears to be a grey blanket into which the clouds have been stitched like so many patches to cover holes or tears.

Cica Ghost: Happy Town, April 2023

It is an engaging and imaginative setting, a place where only the citizens appear to be organic – and even these are not human. Instead, this is a town apparently populated by anthropomorphic cats who tend happy-go-lucky sheep, chickens and pigs whilst also working as the local mechanics. And even then, I’m not sure the sheep or chickens are actually being “kept” so much as also being local inhabitants.

True, they might for the most part be clustered in what might be taken for a central meadow, along with their barns and hen-houses, whilst hemmed in (so to speak!) by a low fence with a single opening; but equally, might this not also be the local park where the locals have simply come for some weekend fun? Certainly, the high-fiving chickens seem to be having fun, and the sheep – whilst possibly not related to Shaun the Sheep – look just as capable as him.

Cica Ghost: Happy Town, April 2023

The buildings are a curious mix – some on the ground, others up on stilts, some as wide as they are tall, some with pipes entering or exiting them. It is here that those of us with long memories might feel that hint of nostalgia, as there is something about Happy Town that brings forth memories of Cica’s 2014 Small Town. This is further aided by the presence of the little cars and the road winding through the town. While both are different in nature to those of Small Town, sitting in one of the cars and setting out along the road brings back memories of driving around Small Town.

As well as the cars to drive (you can be sure they are roadworthy thanks to the cats looking after them!), Happy Town includes places where you can dance, places to sit, ladders to climb, and a little theatre where another memory from Cica’s past builds awaits: one of her animated stick figures, as seen in the likes of Ghostville, offered as a movie to be enjoyed.

Cica Ghost: Happy Town, April 2023

Delightful and light, Happy Town will be open through April for people to enjoy.

SLurl Details

  • Happy Town (Mysterious, rated Moderate)

Have A Great Week From All Of Us At Zoha Islands / Fruit Islands