I don't think there is an "easy" answer to this, which perhaps explains the meagre responses so far to such a core question. Although there are companies offering automated configuration checks for some cloud services, it will be very hard for third parties to create and maintain such configuration information.
I'd suggest these are "interesting models" of what you would want:
(1) Under ACSC IRAP, the audit reports include the configuration caveats required to bring the certified service up to the standard of the Australian Government's Information Security Manual. For example, when I reviewed the AWS report, the auditors recommended enabling encryption on various EC2-related services (AWS are very open with the contents of AWS Artifact, more so than the various certifications require them to be).

(2) I would also note that the Microsoft guidance, produced a few years back, on handling UK information at "OFFICIAL" in O365 was also a good example of guidance.
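On the encryption point: for EC2/EBS specifically, the kind of check the auditors describe can be partly automated. A minimal sketch, assuming boto3 (the function name and structure are mine, not AWS's); the client is passed in so the logic can be exercised without AWS credentials:

```python
def ensure_default_ebs_encryption(ec2_client):
    """Check the account/region default EBS encryption setting; enable it if off.

    `ec2_client` is anything exposing the two boto3 EC2 calls used below,
    e.g. boto3.client("ec2") for the real thing.
    Returns True if encryption was already on, False if we had to enable it.
    """
    status = ec2_client.get_ebs_encryption_by_default()
    if status["EbsEncryptionByDefault"]:
        return True
    # The setting is per region; newly created volumes will then use the
    # default (or a configured) KMS key automatically.
    ec2_client.enable_ebs_encryption_by_default()
    return False
```

In practice you would call it with something like `ensure_default_ebs_encryption(boto3.client("ec2", region_name="ap-southeast-2"))`, once per region in scope.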
There are various aspects of these sources of configuration advice that make this advice possible.
(a) A focus on the client's security needs, not the supplier's (CSA STAR seeks to do this too).
(b) A clear baseline (the Australian ISM, the OFFICIAL handling requirements) or certification/compliance goal to meet. This acts as a "substitute" for a threat model, while the restrictions it places on the sensitivity of the material covered in theory cap the total risk exposure (UK OFFICIAL covers basically the day-to-day working documents of the UK public sector that aren't suitable to be public, but not material relating to sensitive military or law enforcement activity). Without a compliance-style goal, other documents end up caveated with "if X is a concern, do Y". NCSC have tried to use principles to drive this and encourage users to think about security, but while that might produce sophisticated users, I'm assuming you are after advice with more technical "meat", more fleshed out than the 14 principles.
(c) Governments were paying for it. I'm not sure this is strictly necessary, but public sector constraints may make it easier to justify the expense of protecting data about members of the public, and governments provide a certain amount of scale (the UK government has a lot of O365 users; in a previous role I was pushing these documents at a lot of people who should already have read them, so it isn't clear the UK maximised the value of its engagement with Microsoft Professional Services, but they tried). Either way, the takeaway is that there was a clear way to finance the production of the advice.
(d) Supplier capability and engagement. It requires a certain amount of sophistication on the part of suppliers. IRAP had only certified 21 services when I was reviewing it; this likely reflects the expense and work involved in certification, but getting to the level where a supplier can assist in hardening, or admit when a service is not suitable, requires maturity in their approach to security. I believe supplier engagement is really key (I'm not sure how this works with CIS): only AWS themselves can supply enough detail of their encryption processes to allow an assessor to say whether their encryption service is suitable for a given level of threat. But as always, a critical external eye is needed, in case they overlook, or embellish, an aspect of their security stance.
IRAP has an ongoing review process built in; something similar would clearly be needed to keep any such guidance current.
If I think of AWS or O365, the complexity of the offerings makes it really hard to generalise. I can see a clear case that O365 users want the more expensive licence for a CIS "best practice" guide, yet some of the Microsoft guidance the NCSC reference is clearly focused on a "second best" licence, although Microsoft will sell some of the services covered by the more expensive licences as add-ons to lower-tier licensees. There are separate issues with over-complex security tiers in products, but that is another discussion; suffice to say I like Google's simplicity in this regard, where you can opt in to things like mandating that partners' email servers have valid TLS certificates, while identifying malicious activity is basically part of the core offering. That aside, the complexity of these offerings makes configuration advice more complex, and the cost model is very different from, say, employee time spent hardening on-premise equipment to CIS benchmarks.
Within something like the CSA CCM/CAIQ approach, I could imagine asking audited services to note where they have optional features, or configuration, relevant to a given control or group of controls. That could be recorded in the matrix, which would also provide a way to refresh the entries on a regular basis to ensure they are still current and appropriate.
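To make that concrete, each matrix entry could carry an extra field listing the supplier-configurable options relevant to the control, plus a review date, so stale entries are easy to flag for refresh. A minimal sketch (the field names and the example control IDs are my own illustration, not CSA's schema):

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ControlEntry:
    control_id: str      # e.g. a CCM control reference
    answer: str          # the supplier's CAIQ-style response
    # Configurable options the client must set to actually meet the control.
    optional_features: list = field(default_factory=list)
    last_reviewed: date = date.today()

def stale_entries(matrix, max_age_days=365):
    """Return the entries not reviewed within max_age_days."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [e for e in matrix if e.last_reviewed < cutoff]
```

A periodic job running `stale_entries(...)` over the matrix would give the supplier (or the auditor) a simple work queue for keeping the configuration notes current.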
On a more immediately practical basis: in a previous role I maintained brief security pages for suppliers (including cloud suppliers) and for the products and dependencies used in that company's solutions and software products. They lived in a wiki (intentionally easy for other employees or contractors to both read and maintain, but with a clear audit log and notification of changes). The emphasis, however, was always on linking out to public sources of information on security, since you want the least possible content to maintain in-house. Indeed, wherever possible we would update Wikipedia, or contribute to the security advice of the projects themselves, rather than produce in-house advice, since that gave us assistance beyond our own resources in maintaining it.

One could take the view that the most effective way to share this knowledge communally is to ensure there is a "security" section on every cloud service's Wikipedia page, listing incidents of note, vulnerabilities of note, and pointers to where advice on secure use and configuration may be found, which is basically what I recorded internally, along with details of our internal use of the products. Since Wikipedia forbids original research, it might make sense to link back to resources like the CSA's top-threats reports, which are a good source of material on past incidents.
The Wikipedia team-type approach is a well-trodden path: there is a WikiProject that aims to document aspects of computing on Wikipedia, including security, though I'm not sure how the community aspects work there.
Taking a look, the AWS EC2 Wikipedia article has a little information on incidents affecting EC2, but precious little on preventing them, so it would seem ripe for a refresh. The ban on "original research" means we can only link to existing research and give ideas backed by solid evidence, but if anything that is useful discipline rather than a barrier to communication. If I'm inspired I'll give it a go and see whether the "revert bots", or Amazon, leap in to try and undo the changes.
The page is ripe for some other tidying up too, as it still reads a little like sales material and is not as encyclopaedic as it should be. There have been objections to it covering a brand or product, but I don't think those hold: Microsoft Windows and Hoover have pages, and notability is what matters. Being approved under key government security certifications, and the sheer scale of impact, mean that issues with EC2 escalate.