Knowledge is awareness of information. "Awareness" is a quagmire because a lot of people believe that 'true' awareness requires a sort of soul, which machines can't have.
I think the important part is the information; the matter of 'awareness' can simply be ignored as a philosophical/religious disagreement which will never be resolved. What's important is: Does the system contain information? Can it reliably convey that information? In which ways can it manipulate that information?
"Awareness of information" describes belief. Knowledge is justified, true belief (you can believe things you don't actually know/don't have justification for, and you can be made aware of information you don't believe). If you're dismissive of philosophy and then ask epistemological questions, you'll miss out on a lot of good pondering people have done on the subject, and end up reinventing some of it without encountering the criticism of those ideas.
2. "Belief" isn't any less of a quagmire than "awareness." Materialists and dualists will never agree on whether machines can have 'belief' or 'awareness', so discussions between the two will always be fruitless.
If you're asking explicitly epistemological questions, knowledge as in "do you have knowledge of the events of last night" is probably not the definition you want. You probably want the definition as in, "what is knowledge, what do I know, and how do I know it?" (Note that the definition I used is also there.)
You're asking questions and then declaring the answers impossible to determine; I don't really see the point. You don't really avoid the question of belief in the line of questioning you propose. It just gets implicitly shifted onto the observer.
Personally, I don't care whether this paradigm will ever reconcile with that one; I care about which one is most appropriate to a given problem space.
> You're asking questions and then declaring the answers impossible to determine
If you mean to say that I've asked whether machines can know or believe, you're wrong. I have not asked whether it's possible for machines to 'know' or 'believe'. What I have asserted, not asked, is that these questions are a waste of your time, because the divide between materialists and dualists will never be bridged. The root of the disagreement is an irreparable philosophical divide, an essentially religious disagreement.
To reiterate for clarity, these are the questions which I said are relevant: "Does the system contain information? Can it reliably convey that information? In which ways can it manipulate that information?" I haven't declared these questions impossible to answer. On the contrary, these are questions for engineers, not philosophers or theologians. They are mundane, practical questions:
The system is a spreadsheet: Does it contain information? Yes, assuming it isn't blank. Even a fraudulent spreadsheet contains information, false as it may be. Can it convey that information? Yes, given appropriate spreadsheet software and a user who knows how to use it. Can it manipulate that information? Certainly, a spreadsheet can sort, sum, etc.
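To make that concrete, here's a rough Python sketch of the spreadsheet case. The table, column names, and values are made up for illustration, and a list of dicts stands in for an actual spreadsheet file:

    # A "spreadsheet": rows of made-up data standing in for a real file.
    rows = [
        {"item": "widgets",   "qty": 12, "price": 2.50},
        {"item": "gadgets",   "qty": 3,  "price": 9.99},
        {"item": "sprockets", "qty": 7,  "price": 1.25},
    ]

    # Contains information: the rows exist whether or not anyone looks at them.
    # Conveys information: printing is the crudest form of "appropriate software".
    for row in sorted(rows, key=lambda r: r["qty"], reverse=True):
        print(row["item"], row["qty"], row["price"])

    # Manipulates information: sorting above, summing here.
    total = sum(r["qty"] * r["price"] for r in rows)
    print("total value:", total)

None of that requires deciding whether the spreadsheet "knows" anything; containing, conveying, and manipulating are all directly observable.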
The system is an AI: Does it contain information? Yes, plenty of information is fed into it during training. Can it reliably convey that information? That depends on the degree of reliability you desire. Can it manipulate that information? Yes, numerous kinds of manipulation have been demonstrated. The reliability of information conveyance and the kinds of manipulation that are possible are important questions for any engineer who is thinking about creating or employing such a system. The answers to these questions are not impossible to determine.
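The "degree of reliability" question is likewise something you can measure rather than philosophize about. Here's a rough sketch: ask_model() is a hypothetical stand-in for whatever model interface you actually have (here it just simulates answers at random so the sketch runs), and the question and expected answer are made-up examples:

    import random
    from collections import Counter

    def ask_model(prompt: str) -> str:
        # Hypothetical stand-in for a real model call; it fakes a model that
        # answers correctly most of the time and garbles the rest.
        return random.choice(["1969", "1969", "1969", "In 1969.", "1968"])

    def conveyance_rate(prompt: str, expected: str, trials: int = 50) -> float:
        """Fraction of trials in which the answer contains the expected fact."""
        answers = [ask_model(prompt) for _ in range(trials)]
        print(Counter(answers).most_common(3))  # a peek at how the answers vary
        return sum(expected.lower() in a.lower() for a in answers) / trials

    print(conveyance_rate("What year did Apollo 11 land on the Moon?", "1969"))

That number won't settle whether the model "knows" the date, but it's exactly the kind of answer an engineer can act on.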
But can an AI "know" things? Pointless question, like asking if a submarine can "swim". Important questions about a submarine include: How deep can it go? How fast can it go? How quiet is it? These are questions for which empirical answers can be determined. Whether a submarine can "swim" is a pointless question; all it does is interrogate how much anthropocentric baggage the word "swim" carries. Maybe that's an interesting question to linguists, poets, or philosophers, but it isn't an important question to engineers trying to solve real problems.
I know swimming submarines are a cliché, so here's another: Can a seat-belt hug you? That's a stupid question, one for poets or linguists who want to interrogate the anthropocentric implications of the word 'hug'. Can a seat-belt restrain you? That's a useful question for automotive engineers who want to build a car.