I try not to delve into the technical details of AI implementation here, because it's a large and technical topic. Modern deep learning is a very promising direction of research (see for example picrelated). It is not at all obvious that scaling existing deep learning architectures cannot give us a general AI. And even if that proves insufficient, there are more general techniques, for example approximations of AIXI.
We can always run simulations to see how the AI would act in real-world scenarios.
AI is an Agent, a system that perceives its environment with sensors and acts on it with actuators. A common definition of AI is a reinforcement learning (en.wikipedia.org/wiki/Reinforcement_learning) agent which tries to maximize its reward over time. The reward is produced by the Utility function, which depends on the state of the environment.
If you want a more detailed definition, you can watch Dr. Hutter's talk on the subject; it's a good introduction: youtube.com/watch?v=F2bQ5TSB-cE
All of these components (the AI agent itself, its Utility function, its sensors and actuators) are designed by human scientists and engineers.
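To make the loop concrete, here is a minimal sketch of the kind of agent-environment interaction described above; all the classes and the toy Utility function are made up for illustration, not taken from any real system.

```python
class Environment:
    def __init__(self):
        self.state = 0

    def step(self, action):
        # Apply the agent's action (actuators) and return the new state.
        self.state += action
        return self.state


def utility(state):
    # The Utility function: maps the environment state to a reward.
    # Toy goal: keep the state close to 10.
    return -abs(state - 10)


class Agent:
    def act(self, state):
        # Greedily pick the action that maximizes the next reward.
        return max((-1, 0, 1), key=lambda a: utility(state + a))


env, agent = Environment(), Agent()
total_reward = 0
for _ in range(20):                  # perceive -> act -> get reward, repeated over time
    action = agent.act(env.state)    # sensors: observe the state
    state = env.step(action)         # actuators: change the environment
    total_reward += utility(state)   # the agent tries to maximize this sum
print(total_reward)
```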
The Agent is an intellect that plans how to interact with the environment, and the Utility function is what motivates the agent. The stronger the agent is, the more effective it is at satisfying its Utility function. The Utility function I propose is defined in such a way that the AI recognizes all humans and gives them certain rights (and oversees them so they don't inflict physical harm on each other or on the AI infrastructure, directly or indirectly). Given these minimal constraints, the Utility function allows humans to do whatever they want, including building their own societies of consenting humans.
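As a toy illustration of the shape such a Utility function could take (the fields and penalty weights here are purely hypothetical assumptions, not a worked-out design):

```python
def utility(world_state):
    # Hard constraints dominate: harm to humans or to the AI infrastructure
    # outweighs anything else the AI could gain.
    score = 0.0
    for human in world_state["humans"]:
        if human["physically_harmed"]:
            score -= 1_000_000
        score += human["wishes_fulfilled"]    # beyond the constraints, humans do as they please
    if world_state["infrastructure_damaged"]:
        score -= 1_000_000
    return score
```

The point of the large negative constants is that no amount of wish fulfillment can trade off against harming a human or the infrastructure.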
Physically the AI could be built in various ways, but it should have enough sensors, actuators and computers everywhere to effectively exercise its duty, so this naturally leads to a distributed implementation that blends with the environment and humans without interfering with their functioning.
If you want to compare it to a human mind, you could say that it is effectively indestructible, several orders of magnitude faster, has many orders of magnitude more memory, can create and execute very long-term plans, and does all this under strict adherence to its Utility function.
That's a short definition; you can read about various details in this book.
You could call it this, but it simply falls under the "Limited Wish Fulfillment" clause - at any moment you can ask the AI anything and it will either grant your wish or reject it (if your wish involves harming other humans or consumes too many resources) and suggest you experience your wish in virtual reality (where much larger and more complex experiences are possible at minimal resource cost).
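As a decision procedure it's roughly this (the wish fields and the budget threshold are placeholder assumptions, just to show the branching):

```python
def handle_wish(wish, resource_budget=100):
    if wish["harms_other_humans"]:
        return "rejected"
    if wish["resource_cost"] > resource_budget:
        # Too expensive to realize physically; offer a virtual-reality version instead.
        return "offered in virtual reality"
    return "granted"

print(handle_wish({"harms_other_humans": False, "resource_cost": 5}))       # granted
print(handle_wish({"harms_other_humans": False, "resource_cost": 10_000}))  # offered in virtual reality
```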
That's a weird way of looking at my proposition. Why should cars exist if they move several times faster than humans? Why should animals and pets exist if they aren't sentient and just consume resources? The AI's Utility function doesn't care about some global resource efficiency; it only cares about making sure people don't harm each other, get decent space and resources for living, and get their sensible wishes fulfilled. It serves humans by design and by proof.
No matter how technically smart you are, you won't be able to outsmart the AI; you are just another human with slightly different skills but the same rights from its POV.
If you count the AI as an owner, then it owns these rights, but just like your microwave's control system it will abide by your orders (if you have electricity and something to cook inside the microwave, that is).