

Nah, the US is going for the “easy” targets first.
My guess for the order is Greenland, Iceland, Mexico, Canada with other small Middle American countries sprinkled after Mexico and during/after Canada.
I just hope the US implodes first.


Venezuelan prison.


Come on! Don’t insult pigs. They are lovely, tasty animals.


Should be the other way around. It’s the kings who want war. Not the soldiers.


I was thinking more about the weapon suppliers maximising profits at the expense of human lives.


There’s a case to be made for defensive wars as well.


But there are aerial drones currently in use that have AI targeting.
You could articulate the multiple reasons why you don’t want to.
Thankfully we have hard rock Christmas music here.
Learn to use the word “No” from time to time.


Eventually it will when enough people die.


I’ve expanded the scope of my job. From sysadmin to Information Security Officer etc. etc.


So you’ve never heard of the American Service-Members’ Protection Act, known informally as the Hague Invasion Act, enacted in 2002? It protects American war criminals by authorizing an invasion of The Hague if necessary.


Hardly. The US has been meddling in our elections for a long time. The meddling just hasn’t been as direct or as kinetic as in your cases.


It is a question of authorship. What I don’t approve of is zero-effort AI slop. The use of CGI in movies is OK because it serves the vision of the director. The use of samples in music is OK if it is transformative. Autotuning is pushing it, but can be OK if its use is limited or transformative. Even AI tools can be OK if authorship remains human. But an end-to-end pipeline of endless, soulless AI-generated slop is not OK. So it is very much a question of degree. AI-generated or AI-authored works should be labelled as such, and the label should state the degree to which AI was used, not just be an either/or tag.


So you’re happy that you are being sold a lie?


Generally, the more morally reprehensible a business is, the better it pays.
You have to draw the line somewhere. The further from real harm, the better.


I dunno. Lemmy.ml is pretty toxic too.


Thanks for this. I also learned more of the context of the conflict.
Why am I not surprised that Nestle is involved in this?